Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Artificial Intelligence.

Friday, March 1, 2024

AI needs the constraints of the human brain

Danyal Akarca
iai.tv
Originally posted 30 Jan 24

Here is an excerpt:

So, evolution shapes systems that are capable of solving competing problems that are both internal (e.g., how to expend energy) and external (e.g., how to act to survive), but in a way that can be highly efficient, in many cases elegant, and often surprising. But how does this evolutionary story of biological intelligence contrast with the current paradigm of AI?

In some ways, quite directly. Since the 50s, neural networks were developed as models that were inspired directly from neurons in the brain and the strength of their connections, in addition to many successful architectures of the past being directly motivated by neuroscience experimentation and theory. Yet, AI research in the modern era has occurred with a significant absence of thought of intelligent systems in nature and their guiding principles. Why is this? There are many reasons. But one is that the exponential growth of computing capabilities, enabled by increases of transistors on integrated circuits (observed since the 1950s, known as Moore’s Law), has permitted AI researchers to leverage significant improvements in performance without necessarily requiring extraordinarily elegant solutions. This is not to say that modern AI algorithms are not widely impressive – they are. It is just that the majority of the heavy lifting has come from advances in computing power rather than their engineered design. Consequently, there has been relatively little recent need or interest from AI experts to look to the brain for inspiration.

But the tide is turning. From a hardware perspective, Moore’s law will not continue ad infinitum (at 7 nanometers, transistor channel lengths are now nearing fundamental limits of atomic spacing). We will therefore not be able to leverage ever improving performance delivered by increasingly compact microprocessors. It is likely therefore that we will require entirely new computing paradigms, some of which may be inspired by the types of computations we observe in the brain (the most notable being neuromorphic computing). From a software and AI perspective, it is becoming increasingly clear that – in part due to the reliance on increases to computational power – the AI research field will need to refresh its conceptions as to what makes systems intelligent at all. For example, this will require much more sophisticated benchmarks of what it means to perform at human or super-human performance. In sum, the field will need to form a much richer view of the possible space of intelligent systems, and how artificial models can occupy different places in that space.


Key Points:
  • Evolutionary pressures: Efficient, resource-saving brains are advantageous for survival, leading to optimized solutions for learning, memory, and decision-making.
  • AI's reliance on brute force: Modern AI often achieves performance through raw computing power, neglecting principles like energy efficiency.
  • Shifting AI paradigm: Moore's Law's end and limitations in conventional AI call for exploration of new paradigms, potentially inspired by the brain.
  • Neurobiology's potential: Brain principles like network structure, local learning, and energy trade-offs can inform AI design for efficiency and novel functionality.
  • Embodied AI with constraints: Recent research incorporates space and communication limitations into AI models, leading to features resembling real brains and potentially more efficient information processing.
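The last bullet lends itself to a concrete illustration. Below is a minimal sketch (my own illustration, not code from the article) of how a spatial constraint can be folded into a network's training objective: each unit is assigned a position, and a wiring cost proportional to connection strength times distance is added to the task loss, so learning must trade performance against "biological" economy. All names and numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units = 32
positions = rng.uniform(0.0, 1.0, size=(n_units, 3))   # give each unit a 3D coordinate
distances = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)

W = rng.normal(0.0, 0.1, size=(n_units, n_units))       # toy recurrent weight matrix

def wiring_cost(W, distances):
    """Total 'cable length': strong, long-range connections are penalized most."""
    return float(np.sum(np.abs(W) * distances))

def total_loss(task_loss, W, distances, lam=1e-3):
    """Task performance plus a spatial penalty, in the spirit of spatially embedded networks."""
    return task_loss + lam * wiring_cost(W, distances)

print(total_loss(task_loss=0.42, W=W, distances=distances))
```

Trained end to end, a penalty like this nudges the weight matrix toward the sparse, spatially clustered connectivity that the key points above describe as resembling real brains.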

Tuesday, February 27, 2024

Robot, let us pray! Can and should robots have religious functions? An ethical exploration of religious robots

Puzio, A.
AI & Soc (2023).
https://doi.org/10.1007/s00146-023-01812-z

Abstract

Considerable progress is being made in robotics, with robots being developed for many different areas of life: there are service robots, industrial robots, transport robots, medical robots, household robots, sex robots, exploration robots, military robots, and many more. As robot development advances, an intriguing question arises: should robots also encompass religious functions? Religious robots could be used in religious practices, education, discussions, and ceremonies within religious buildings. This article delves into two pivotal questions, combining perspectives from philosophy and religious studies: can and should robots have religious functions? Section 2 initiates the discourse by introducing and discussing the relationship between robots and religion. The core of the article (developed in Sects. 3 and 4) scrutinizes the fundamental questions: can robots possess religious functions, and should they? After an exhaustive discussion of the arguments, benefits, and potential objections regarding religious robots, Sect. 5 addresses the lingering ethical challenges that demand attention. Section 6 presents a discussion of the findings, outlines the limitations of this study, and ultimately responds to the dual research question. Based on the study’s results, brief criteria for the development and deployment of religious robots are proposed, serving as guidelines for future research. Section 7 concludes by offering insights into the future development of religious robots and potential avenues for further research.


Summary

Can robots fulfill religious functions? The article explores the technical feasibility of designing robots that could engage in religious practices, education, and ceremonies. It acknowledges the current limitations of robots, particularly their lack of sentience and spiritual experience. However, it also suggests potential avenues for development, such as robots equipped with advanced emotional intelligence and the ability to learn and interpret religious texts.

Should robots fulfill religious functions? This is where the ethical debate unfolds. The article presents arguments both for and against. On the one hand, robots could potentially offer various benefits, such as increasing accessibility to religious practices, providing companionship and spiritual guidance, and even facilitating interfaith dialogue. On the other hand, concerns include the potential for robotization of faith, the blurring of lines between human and machine in the context of religious experience, and the risk of reinforcing existing biases or creating new ones.

Ultimately, the article concludes that there is no easy answer to the question of whether robots should have religious functions. It emphasizes the need for careful consideration of the ethical implications and ongoing dialogue between religious communities, technologists, and ethicists. This ethical exploration paves the way for further research and discussion as robots continue to evolve and their potential roles in society expand.

Tuesday, February 20, 2024

Understanding Liability Risk from Using Health Care Artificial Intelligence Tools

Mello, M. M., & Guha, N. (2024).
The New England journal of medicine, 390(3), 271–278. https://doi.org/10.1056/NEJMhle2308901

Optimism about the explosive potential of artificial intelligence (AI) to transform medicine is tempered by worry about what it may mean for the clinicians being "augmented." One question is especially problematic because it may chill adoption: when AI contributes to patient injury, who will be held responsible?

Some attorneys counsel health care organizations with dire warnings about liability and dauntingly long lists of legal concerns. Unfortunately, liability concern can lead to overly conservative decisions, including reluctance to try new things. Yet, older forms of clinical decision support provided important opportunities to prevent errors and malpractice claims. Given the slow progress in reducing diagnostic errors, not adopting new tools also has consequences and at some point may itself become malpractice. Liability uncertainty also affects AI developers' cost of capital and incentives to develop particular products, thereby influencing which AI innovations become available and at what price.

To help health care organizations and physicians weigh AI-related liability risk against the benefits of adoption, we examine the issues that courts have grappled with in cases involving software error and what makes them so challenging. Because the signals emerging from case law remain somewhat faint, we conducted further analysis of the aspects of AI tools that elevate or mitigate legal risk. Drawing on both analyses, we provide risk-management recommendations, focusing on the uses of AI in direct patient care with a "human in the loop" since the use of fully autonomous systems raises additional issues.

(cut)

The Awkward Adolescence of Software-Related Liability

Legal precedent regarding AI injuries is rare because AI models are new and few personal-injury claims result in written opinions. As this area of law matures, it will confront several challenges.

Challenges in Applying Tort Law Principles to Health Care Artificial Intelligence (AI).

Ordinarily, when a physician uses or recommends a product and an injury to the patient results, well-established rules help courts allocate liability among the physician, product maker, and patient. The liabilities of the physician and product maker are derived from different standards of care, but for both kinds of defendants, plaintiffs must show that the defendant owed them a duty, the defendant breached the applicable standard of care, and the breach caused their injury; plaintiffs must also rebut any suggestion that the injury was so unusual as to be outside the scope of liability.

The article is paywalled, which is not how this should work.

Saturday, February 3, 2024

How to Navigate the Pitfalls of AI Hype in Health Care

Suran M, Hswen Y.
JAMA.
Published online January 03, 2024.

What is AI snake oil, and how might it hinder progress within the medical field? What are the inherent risks in AI-driven automation for patient care, and how can we ensure the protection of sensitive patient information while maximizing its benefits?

When it comes to using AI in medicine, progress is important—but so is proceeding with caution, says Arvind Narayanan, PhD, a professor of computer science at Princeton University, where he directs the Center for Information Technology Policy.


Here is my summary:
  • AI has the potential to revolutionize healthcare, but it is important to be aware of the hype and potential pitfalls.
  • One of the biggest concerns is bias. AI algorithms can be biased based on the data they are trained on, which can lead to unfair or inaccurate results. For example, an AI algorithm trained on data from mostly white patients may be less accurate at diagnosing diseases in Black patients (a quick checking sketch appears after this summary).
  • Another concern is privacy. AI algorithms require large amounts of data to work, and this data can be very sensitive. It is important to make sure that patient data is protected and that patients have control over how their data is used.
  • It is also important to remember that AI is not a magic bullet. AI can be a valuable tool, but it is not a replacement for human judgment. Doctors and other healthcare professionals need to be able to critically evaluate the output of AI algorithms and make sure that it is being used in a safe and ethical way.
Overall, the interview is a cautionary tale about the potential dangers of AI in healthcare. It is important to be aware of the risks and to take steps to mitigate them. But it is also important to remember that AI has the potential to do a lot of good in healthcare. If we develop and use AI responsibly, it can help us to improve the quality of care for everyone.
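As a concrete, entirely hypothetical illustration of the bias point above, one basic check is to score a model's accuracy separately for each patient group and inspect the gap. The data below are simulated; nothing here comes from the JAMA interview.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated evaluation set: true diagnoses, model predictions, and a group label.
y_true = rng.integers(0, 2, size=1000)
group = rng.choice(["white", "Black"], size=1000)
# Simulate a model that errs more often for one group (25% vs 10% error rate).
flip = np.where(group == "Black", rng.random(1000) < 0.25, rng.random(1000) < 0.10)
y_pred = np.where(flip, 1 - y_true, y_true)

accs = {}
for g in ["white", "Black"]:
    mask = group == g
    accs[g] = float(np.mean(y_pred[mask] == y_true[mask]))
    print(f"accuracy for {g} patients: {accs[g]:.2f}")

print(f"accuracy gap: {abs(accs['white'] - accs['Black']):.2f}")
```

A gap like this is only the start of a fairness audit, but it turns the abstract worry about biased training data into something measurable.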

Here are some additional points that were made in the interview:
  • AI can be used to help with a variety of tasks in healthcare, such as diagnosing diseases, developing new treatments, and managing chronic conditions.
  • There are a number of different types of AI, each with its own strengths and weaknesses.
  • It is important to choose the right type of AI for the task at hand.
  • AI should always be used in conjunction with human judgment.

Friday, February 2, 2024

Young people turning to AI therapist bots

Joe Tidy
BBC.com
Originally posted 4 Jan 24

Here is an excerpt:

Sam has been so surprised by the success of the bot that he is working on a post-graduate research project about the emerging trend of AI therapy and why it appeals to young people. Character.ai is dominated by users aged 16 to 30.

"So many people who've messaged me say they access it when their thoughts get hard, like at 2am when they can't really talk to any friends or a real therapist,"
Sam also guesses that the text format is one with which young people are most comfortable.
"Talking by text is potentially less daunting than picking up the phone or having a face-to-face conversation," he theorises.

Theresa Plewman is a professional psychotherapist and has tried out Psychologist. She says she is not surprised this type of therapy is popular with younger generations, but questions its effectiveness.

"The bot has a lot to say and quickly makes assumptions, like giving me advice about depression when I said I was feeling sad. That's not how a human would respond," she said.

Theresa says the bot fails to gather all the information a human would and is not a competent therapist. But she says its immediate and spontaneous nature might be useful to people who need help.
She says the number of people using the bot is worrying and could point to high levels of mental ill health and a lack of public resources.


Here are some important points:

Reasons for appeal:
  • Cost: Traditional therapy's expense and limited availability drive some towards bots, seen as cheaper and readily accessible.
  • Stigma: Stigma associated with mental health might make bots a less intimidating first step compared to human therapists.
  • Technology familiarity: Young people, comfortable with technology, find text-based interaction with bots familiar and less daunting than face-to-face sessions.
Concerns and considerations:
  • Bias: Bots trained on potentially biased data might offer inaccurate or harmful advice, reinforcing existing prejudices.
  • Qualifications: Lack of professional mental health credentials and oversight raises concerns about the quality of support provided.
  • Limitations: Bots aren't replacements for human therapists. Complex issues or severe cases require professional intervention.

Wednesday, December 27, 2023

This algorithm could predict your health, income, and chance of premature death

Holly Barker
Science.org
Originally published 18 Dec 23

Here is an excerpt:

The researchers trained the model, called “life2vec,” on every individual’s life story between 2008 and 2016, and the model sought patterns in these stories. Next, they used the algorithm to predict whether someone on the Danish national registers had died by 2020.

The model’s predictions were accurate 78% of the time. It identified several factors that favored a greater risk of premature death, including having a low income, having a mental health diagnosis, and being male. The model’s misses were typically caused by accidents or heart attacks, which are difficult to predict.

Although the results are intriguing—if a bit grim—some scientists caution that the patterns might not hold true for non-Danish populations. “It would be fascinating to see the model adapted using cohort data from other countries, potentially unveiling universal patterns, or highlighting unique cultural nuances,” says Youyou Wu, a psychologist at University College London.

Biases in the data could also confound its predictions, she adds. (The overdiagnosis of schizophrenia among Black people could cause algorithms to mistakenly label them at a higher risk of premature death, for example.) That could have ramifications for things such as insurance premiums or hiring decisions, Wu adds.


Here is my summary:

A new algorithm, trained on a mountain of Danish life stories, can peer into your future with unsettling precision. It can predict your health, income, and even your odds of an early demise. This capability, achieved by analyzing sequences of life events such as getting a job or falling ill, raises both possibilities and ethical concerns.

On one hand, imagine the potential for good: nudges towards healthier habits or financial foresight, tailored to your personal narrative. On the other, anxieties around bias and discrimination loom. We must ensure this powerful tool is used wisely, for the benefit of all, lest it exacerbate existing inequalities or create new ones. The algorithm’s gaze into the future, while remarkable, is just that – a glimpse, not a script. 
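To make the setup concrete, here is a deliberately simplified sketch of the kind of pipeline described above. life2vec itself is a transformer trained on full Danish registry sequences; this toy version merely encodes each synthetic person's life events as counts and fits a logistic classifier for a binary mortality label, just to show the shape of the problem. Every event name and label below is invented.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic "life stories": space-separated event tokens per person (invented vocabulary).
stories = [
    "job_start salary_low diagnosis_depression hospital_visit",
    "job_start salary_high education_degree job_change",
    "unemployment salary_low hospital_visit hospital_visit",
    "education_degree job_start salary_high",
]
died_by_2020 = np.array([1, 0, 1, 0])  # invented outcome labels

# Bag-of-events features; the real model instead learns contextual event embeddings.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(stories)

clf = LogisticRegression().fit(X, died_by_2020)

new_story = ["salary_low diagnosis_depression unemployment"]
prob = clf.predict_proba(vectorizer.transform(new_story))[0, 1]
print(f"toy predicted probability of early death: {prob:.2f}")
```

A faithful reproduction would preserve the order of events (hence the transformer architecture), handle censoring carefully, and, as Wu notes above, interrogate the biases baked into the registry data.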

Saturday, December 23, 2023

Folk Psychological Attributions of Consciousness to Large Language Models

Colombatto, C., & Fleming, S. M.
(2023, November 22). PsyArXiv

Abstract

Technological advances raise new puzzles and challenges for cognitive science and the study of how humans think about and interact with artificial intelligence (AI). For example, the advent of Large Language Models and their human-like linguistic abilities has raised substantial debate regarding whether or not AI could be conscious. Here we consider the question of whether AI could have subjective experiences such as feelings and sensations (“phenomenological consciousness”). While experts from many fields have weighed in on this issue in academic and public discourse, it remains unknown how the general population attributes phenomenology to AI. We surveyed a sample of US residents (N=300) and found that a majority of participants were willing to attribute phenomenological consciousness to LLMs. These attributions were robust, as they predicted attributions of mental states typically associated with phenomenology – but also flexible, as they were sensitive to individual differences such as usage frequency. Overall, these results show how folk intuitions about AI consciousness can diverge from expert intuitions – with important implications for the legal and ethical status of AI.


My summary:

The results of the study show that people are generally more likely to attribute consciousness to LLMs than to other non-human entities, such as animals, plants, and robots. However, the level of consciousness attributed to LLMs is still relatively low, with most participants rating them as less conscious than humans. The authors argue that these findings reflect the influence of folk psychology, which is the tendency to explain the behavior of others in terms of mental states.

The authors also found that people's attributions of consciousness to LLMs were influenced by their beliefs about the nature of consciousness and their familiarity with LLMs. Participants who were more familiar with LLMs were more likely to attribute consciousness to them, and participants who believed that consciousness is a product of complex computation were also more likely to attribute consciousness to LLMs.

Overall, the study suggests that people are generally open to the possibility that LLMs may be conscious, but they also recognize that LLMs are not as conscious as humans. These findings have implications for the development and use of LLMs, as they suggest that people may be more willing to trust and interact with LLMs that they believe are conscious.

Saturday, November 18, 2023

Resolving the battle of short- vs. long-term AI risks

Sætra, H. S., & Danaher, J.
AI Ethics (2023).

Abstract

AI poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to think two thoughts at the same time. While disagreements over the exact probabilities and impacts of risks will remain, fostering a more productive dialogue will be important. This entails, for example, distinguishing between evaluations of particular risks and the politics of risk. Without proper discussions of AI risk, it will be difficult to properly manage them, and we could end up in a situation where neither short- nor long-term risks are managed and mitigated.


Here is my summary:

Artificial intelligence (AI) poses both short- and long-term risks, but the AI ethics and regulatory communities are struggling to agree on how to prioritize these risks. Some argue that short-term risks, such as bias and discrimination, are more pressing and should be addressed first, while others argue that long-term risks, such as the possibility of AI surpassing human intelligence and becoming uncontrollable, are more serious and should be prioritized.

Sætra and Danaher argue that it is important to consider both short- and long-term risks when developing AI policies and regulations. They point out that short-term risks can have long-term consequences, and that long-term risks can have short-term impacts. For example, if AI is biased against certain groups of people, this could lead to long-term inequality and injustice. Conversely, if we take steps to mitigate long-term risks, such as by developing safety standards for AI systems, this could also reduce short-term risks.

Sætra and Danaher offer a number of suggestions for how to better balance short- and long-term AI risks. One suggestion is to develop a risk matrix that categorizes risks by their impact and likelihood. This could help policymakers to identify and prioritize the most important risks. Another suggestion is to create a research agenda that addresses both short- and long-term risks. This would help to ensure that we are investing in the research that is most needed to keep AI safe and beneficial.
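A minimal sketch of the first suggestion, a likelihood-by-impact risk matrix, could look like the following. The risk names and scores are placeholders for illustration, not anything proposed by Sætra and Danaher.

```python
# Toy risk matrix: score each AI risk on likelihood and impact (1 = low, 5 = high),
# then rank by their product so short- and long-term risks sit in one prioritized list.
risks = {
    "algorithmic discrimination (short-term)": {"likelihood": 5, "impact": 3},
    "mass disinformation (short-term)": {"likelihood": 4, "impact": 4},
    "loss of human control (long-term)": {"likelihood": 2, "impact": 5},
}

ranked = sorted(
    risks.items(),
    key=lambda item: item[1]["likelihood"] * item[1]["impact"],
    reverse=True,
)

for name, scores in ranked:
    print(f"{name}: priority score {scores['likelihood'] * scores['impact']}")
```

The point of such a matrix is less the particular numbers than that it forces short- and long-term risks onto a common scale, which is the kind of shared vocabulary the authors argue the two camps currently lack.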

Friday, November 17, 2023

Humans feel too special for machines to score their morals

Purcell, Z. A., & Bonnefon, J.-F. (2023).
PNAS Nexus, 2(6).

Abstract

Artificial intelligence (AI) can be harnessed to create sophisticated social and moral scoring systems—enabling people and organizations to form judgments of others at scale. However, it also poses significant ethical challenges and is, subsequently, the subject of wide debate. As these technologies are developed and governing bodies face regulatory decisions, it is crucial that we understand the attraction or resistance that people have for AI moral scoring. Across four experiments, we show that the acceptability of moral scoring by AI is related to expectations about the quality of those scores, but that expectations about quality are compromised by people's tendency to see themselves as morally peculiar. We demonstrate that people overestimate the peculiarity of their moral profile, believe that AI will neglect this peculiarity, and resist for this reason the introduction of moral scoring by AI.

Significance Statement

The potential use of artificial intelligence (AI) to create sophisticated social and moral scoring systems poses significant ethical challenges. To inform the regulation of this technology, it is critical that we understand the attraction or resistance that people have for AI moral scoring. This project develops that understanding across four empirical studies—demonstrating that people overestimate the peculiarity of their moral profile, believe that AI will neglect this peculiarity, and resist for this reason the introduction of moral scoring by AI.

The link to the research is above.

My summary:

Here is another example of "myside bias," in which humans base decisions on their perceived uniqueness or the better-than-average effect. This research study investigated whether people would accept AI moral scoring systems. The study found that people are unlikely to accept such systems, in large part because they feel too special for machines to score their personal morals.

Specifically, the results showed that people were more likely to accept AI moral scoring systems if they believed that the systems were accurate. However, even if people believed that the systems were accurate, they were still less likely to accept them if they believed that they were morally unique.

The study's authors suggest that these findings may be due to the fact that people have a strong need to feel unique and special. They also suggest that people may be hesitant to trust AI systems to accurately assess their moral character.

Key findings:
  • People are unlikely to accept AI moral scoring systems, in large part because they feel too special for machines to score their personal morals.
  • People's willingness to accept AI moral scoring is influenced by two factors: their perceived accuracy of the system and their belief that they are morally unique.
  • People are more likely to accept AI moral scoring systems if they believe that the systems are accurate. However, even if people believe that the systems are accurate, they are still less likely to accept them if they believe that they are morally unique.

Tuesday, October 31, 2023

Which Humans?

Atari, M., Xue, M. J., et al.
(2023, September 22).
https://doi.org/10.31234/osf.io/5b26t

Abstract

Large language models (LLMs) have recently made vast advances in both generating and analyzing textual data. Technical reports often compare LLMs’ outputs with “human” performance on various tests. Here, we ask, “Which humans?” Much of the existing literature largely ignores the fact that humans are a cultural species with substantial psychological diversity around the globe that is not fully captured by the textual data on which current LLMs have been trained. We show that LLMs’ responses to psychological measures are an outlier compared with large-scale cross-cultural data, and that their performance on cognitive psychological tasks most resembles that of people from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies but declines rapidly as we move away from these populations (r = -.70). Ignoring cross-cultural diversity in both human and machine psychology raises numerous scientific and ethical issues. We close by discussing ways to mitigate the WEIRD bias in future generations of generative language models.

My summary:

The authors argue that much of the existing literature on LLMs largely ignores the fact that humans are a cultural species with substantial psychological diversity around the globe. This diversity is not fully captured by the textual data on which current LLMs have been trained.

For example, LLMs are often evaluated on their ability to complete tasks such as answering trivia questions, generating creative text formats, and translating languages. However, these tasks are all biased towards the cultural context of the data on which the LLMs were trained. This means that LLMs may perform well on these tasks for people from certain cultures, but poorly for people from other cultures.

Atari and his co-authors argue that it is important to be aware of this bias when interpreting the results of LLM evaluations. They also call for more research on the performance of LLMs across different cultures and demographics.

One specific example they give is the use of LLMs to generate creative text formats, such as poems and code. They argue that LLMs that are trained on a dataset of text from English-speaking countries are likely to generate creative text that is more culturally relevant to those countries. This could lead to bias and discrimination against people from other cultures.

Atari and his co-authors conclude by calling for more research on the following questions:
  • How do LLMs perform on different tasks across different cultures and demographics?
  • How can we develop LLMs that are less biased towards the cultural context of their training data?
  • How can we ensure that LLMs are used in a way that is fair and equitable for all people?
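The r = -.70 reported in the abstract summarizes how closely an LLM's responses track a population's responses as that population's cultural distance from WEIRD societies grows. As a purely illustrative sketch with made-up numbers (not the paper's data), such a correlation can be computed like this:

```python
import numpy as np

# Invented illustration: cultural distance from the United States (x) versus
# similarity between an LLM's responses and each population's responses (y).
cultural_distance = np.array([0.05, 0.10, 0.25, 0.40, 0.55, 0.70, 0.85])
llm_human_similarity = np.array([0.92, 0.88, 0.80, 0.71, 0.60, 0.55, 0.45])

r = np.corrcoef(cultural_distance, llm_human_similarity)[0, 1]
print(f"Pearson r = {r:.2f}")  # strongly negative: similarity falls as distance grows
```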

Wednesday, October 25, 2023

The moral psychology of Artificial Intelligence

Bonnefon, J., Rahwan, I., & Shariff, A.
(2023, September 22). 

Abstract

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in Artificial Intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients, or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human-machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to a moral interaction, or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.

Conclusion

We have not addressed every issue at the intersection of AI and moral psychology. Questions about how people perceive AI plagiarism, about how the presence of AI agents can reduce or enhance trust between groups of humans, about how sexbots will alter intimate human relations, are the subjects of active research programs. Many more yet unasked questions will only be provoked as new AI abilities develop. Given the pace of this change, any review paper will only be a snapshot. Nevertheless, the very recent and rapid emergence of AI-driven technology is colliding with moral intuitions forged by culture and evolution over the span of millennia. Grounding an imaginative speculation about the possibilities of AI with a thorough understanding of the structure of human moral psychology will help prepare for a world shared with, and complicated by, machines.

Sunday, October 8, 2023

Moral Uncertainty and Our Relationships with Unknown Minds

Danaher, J. (2023). 
Cambridge Quarterly of Healthcare Ethics, 
32(4), 482-495.
doi:10.1017/S0963180123000191

Abstract

We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral decision rules that allow us to either minimize the risks of moral wrongdoing or improve the choice-worthiness of our actions. One particular argument adopted in this literature is the “risk asymmetry argument,” which claims that the risks associated with accepting or rejecting some moral facts may be sufficiently asymmetrical as to warrant favoring a particular practical resolution of this uncertainty. Focusing on the case study of artificial beings, this article argues that this is best understood as an ethical-epistemic challenge. The article argues that taking potential risk asymmetries seriously can help resolve disputes about the status of human–AI relationships, at least in practical terms (philosophical debates will, no doubt, continue); however, the resolution depends on a proper, empirically grounded assessment of the risks involved. Being skeptical about basic moral status, but more open to the possibility of meaningful relationships with such entities, may be the most sensible approach to take.


My take: 

John Danaher explores the ethical challenges of interacting with entities whose moral status is uncertain, such as artificial beings, animals, and patients with locked-in syndrome. Danaher argues that this is best understood as an ethical-epistemic challenge, and that we need to develop meta-moral decision rules that allow us to minimize the risks of moral wrongdoing or improve the choiceworthiness of our actions.

One particular argument that Danaher adopts is the "risk asymmetry argument," which claims that the risks associated with accepting or rejecting some moral facts may be sufficiently asymmetrical as to warrant favoring a particular practical resolution of this uncertainty. In the context of human-AI relationships, Danaher argues that it is more prudent to err on the side of caution and treat AI systems as if they have moral standing, even if we are not sure whether they actually do. This is because the potential risks of mistreating AI systems, such as creating social unrest or sparking an arms race, are much greater than the potential risks of treating them too respectfully.

Danaher acknowledges that this approach may create some tension in our moral views, as it suggests that we should be skeptical about the basic moral status of AI systems, but more open to the possibility of meaningful relationships with them. However, he argues that this is the most sensible approach to take, given the ethical-epistemic challenges that we face.

Saturday, October 7, 2023

AI systems must not confuse users about their sentience or moral status

Schwitzgebel, E. (2023).
Patterns, 4(8), 100818.
https://doi.org/10.1016/j.patter.2023.100818 

The bigger picture

The draft European Union Artificial Intelligence Act highlights the seriousness with which policymakers and the public have begun to take issues in the ethics of artificial intelligence (AI). Scientists and engineers have been developing increasingly more sophisticated AI systems, with recent breakthroughs especially in large language models such as ChatGPT. Some scientists and engineers argue, or at least hope, that we are on the cusp of creating genuinely sentient AI systems, that is, systems capable of feeling genuine pain and pleasure. Ordinary users are increasingly growing attached to AI companions and might soon do so in much greater numbers. Before long, substantial numbers of people might come to regard some AI systems as deserving of at least some limited rights or moral standing, being targets of ethical concern for their own sake. Given high uncertainty both about the conditions under which an entity can be sentient and about the proper grounds of moral standing, we should expect to enter a period of dispute and confusion about the moral status of our most advanced and socially attractive machines.

Summary

One relatively neglected challenge in ethical artificial intelligence (AI) design is ensuring that AI systems invite a degree of emotional and moral concern appropriate to their moral standing. Although experts generally agree that current AI chatbots are not sentient to any meaningful degree, these systems can already provoke substantial attachment and sometimes intense emotional responses in users. Furthermore, rapid advances in AI technology could soon create AIs of plausibly debatable sentience and moral standing, at least by some relevant definitions. Morally confusing AI systems create unfortunate ethical dilemmas for the owners and users of those systems, since it is unclear how those systems ethically should be treated. I argue here that, to the extent possible, we should avoid creating AI systems whose sentience or moral standing is unclear and that AI systems should be designed so as to invite appropriate emotional responses in ordinary users.

My take

The article proposes two design policies for avoiding morally confusing AI systems. The first is to create systems that are clearly non-conscious artifacts. This means that the systems should be designed in a way that makes it clear to users that they are not sentient beings. The second policy is to create systems that are clearly deserving of moral consideration as sentient beings. This means that the systems should be designed to have the same moral status as humans or other animals.

The article concludes that the best way to avoid morally confusing AI systems is to err on the side of caution and create systems that are clearly non-conscious artifacts. This is because it is less risky to underestimate the sentience of an AI system than to overestimate it.

Here are some additional points from the article:
  • The scientific study of sentience is highly contentious, and there is no agreed-upon definition of what it means for an entity to be sentient.
  • Rapid advances in AI technology could soon create AI systems that are plausibly debatable as sentient.
  • Morally confusing AI systems create unfortunate ethical dilemmas for the owners and users of those systems, since it is unclear how those systems ethically should be treated.
  • The design of AI systems should be guided by ethical considerations, such as the need to avoid causing harm and the need to respect the dignity of all beings.

Tuesday, October 3, 2023

Emergent analogical reasoning in large language models

Webb, T., Holyoak, K.J. & Lu, H. 
Nat Hum Behav (2023).
https://doi.org/10.1038/s41562-023-01659-w

Abstract

The recent advent of large language models has reinvigorated debate over whether human cognitive capacities might emerge in such generic models given sufficient training data. Of particular interest is the ability of these models to reason about novel problems zero-shot, without any direct training. In human cognition, this capacity is closely tied to an ability to reason by analogy. Here we performed a direct comparison between human reasoners and a large language model (the text-davinci-003 variant of Generative Pre-trained Transformer (GPT)-3) on a range of analogical tasks, including a non-visual matrix reasoning task based on the rule structure of Raven’s Standard Progressive Matrices. We found that GPT-3 displayed a surprisingly strong capacity for abstract pattern induction, matching or even surpassing human capabilities in most settings; preliminary tests of GPT-4 indicated even better performance. Our results indicate that large language models such as GPT-3 have acquired an emergent ability to find zero-shot solutions to a broad range of analogy problems.

Discussion

We have presented an extensive evaluation of analogical reasoning in a state-of-the-art large language model. We found that GPT-3 appears to display an emergent ability to reason by analogy, matching or surpassing human performance across a wide range of problem types. These included a novel text-based problem set (Digit Matrices) modeled closely on Raven’s Progressive Matrices, where GPT-3 both outperformed human participants, and captured a number of specific signatures of human behavior across problem types. Because we developed the Digit Matrix task specifically for this evaluation, we can be sure GPT-3 had never been exposed to problems of this type, and therefore was performing zero-shot reasoning. GPT-3 also displayed an ability to solve analogies based on more meaningful relations, including four-term verbal analogies and analogies between stories about naturalistic problems.

It is certainly not the case that GPT-3 mimics human analogical reasoning in all respects. Its performance is limited to the processing of information provided in its local context. Unlike humans, GPT-3 does not have long-term memory for specific episodes. It is therefore unable to search for previously-encountered situations that might create useful analogies with a current problem. For example, GPT-3 can use the general story to guide its solution to the radiation problem, but as soon as its context buffer is emptied, it reverts to giving its non-analogical solution to the problem – the system has learned nothing from processing the analogy. GPT-3’s reasoning ability is also limited by its lack of physical understanding of the world, as evidenced by its failure (in comparison with human children) to use an analogy to solve a transfer problem involving construction and use of simple tools. GPT-3’s difficulty with this task is likely due at least in part to its purely text-based input, lacking the multimodal experience necessary to build a more integrated world model.

But despite these major caveats, our evaluation reveals that GPT-3 exhibits a very general capacity to identify and generalize – in zero-shot fashion – relational patterns to be found within both formal problems and meaningful texts. These results are extremely surprising. It is commonly held that although neural networks can achieve a high level of performance within a narrowly-defined task domain, they cannot robustly generalize what they learn to new problems in the way that human learners do. Analogical reasoning is typically viewed as a quintessential example of this human capacity for abstraction and generalization, allowing human reasoners to intelligently approach novel problems zero-shot.
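For readers unfamiliar with the task format, the sketch below builds a hypothetical text-based matrix item in the spirit of the Digit Matrices described above, using a simple row-wise progression rule. It illustrates the general format only; it is not the authors' actual materials or prompts.

```python
# A hypothetical digit-matrix item: each row counts up by one,
# and the model must fill in the missing final cell zero-shot.
matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, None],  # the cell to be completed (answer: 9)
]

def format_prompt(matrix):
    rows = []
    for row in matrix:
        rows.append(" ".join("?" if cell is None else str(cell) for cell in row))
    return "Complete the pattern:\n" + "\n".join(rows)

print(format_prompt(matrix))
# Complete the pattern:
# 1 2 3
# 4 5 6
# 7 8 ?
```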

Wednesday, September 27, 2023

Property ownership and the legal personhood of artificial intelligence

Brown, R. D. (2020).
Information & Communications Technology Law, 
30(2), 208–234.


Abstract

This paper adds to the discussion on the legal personhood of artificial intelligence by focusing on one area not covered by previous works on the subject – ownership of property. The author discusses the nexus between property ownership and legal personhood. The paper explains the prevailing misconceptions about the requirements of rights or duties in legal personhood, and discusses the potential for conferring rights or imposing obligations on weak and strong AI. While scholars have discussed AI owning real property and copyright, there has been limited discussion on the nexus of AI property ownership and legal personhood. The paper discusses the right to own property and the obligations of property ownership in nonhumans, and applying it to AI. The paper concludes that the law may grant property ownership and legal personhood to weak AI, but not to strong AI.

From the Conclusion

This article proposes an analysis of legal personhood that focuses on rights and duties. In doing so, the article looks to property ownership, which raises both requirements. Property ownership is certainly only one type of legal right, which also includes the right to sue or be sued, or legal standing, and the right to contract. Property ownership, however, is a key feature of AI since it relies mainly on arguably the most valuable property today: data.

It is unlikely that governments and legislators will suddenly recognise in one event AI’s ownership of property and AI’s legal personhood. Rather, acceptance of AI’s legal personhood, as with the acceptance of corporate personhood, will develop as a process and in stages, in parallel to the development of legal personhood. At first, AI will be deemed a tool and not have the right to own property. This is the most common conception of AI today. Second, AI will be deemed an agent, and upon updating existing agency law to include AI as a person for purposes of agency, AI will also be allowed to own property as an agent in the same agency ownership arrangement that Rothenberg proposes. While AI already acts as a de facto agent in many circumstances today through electronic contracts, most governments and legislators have not recognised AI as an agent. The laws of many countries, like Qatar, still define an agent as a person, which upon strict interpretation would not include AI or an electronic agent. This is an existing gap in the laws that will likely create legal challenges in the near future.

However, as AI develops its ability to communicate and assert more autonomy, then AI will come to own all sorts of digital assets. At first, AI will likely possess and control property in conjunction with human action and decisions. Examples would be the use of AI in money laundering, or hiding digital assets by placing them within the control and possession of an AI. In some instances, AI will have possession and control of property unknown or unforeseen by humans.

If AI is seen as separate from data, as the software that processes and interprets data for various purposes, self-learns from the data, makes autonomous decisions, and predicts human behaviour and decisions, then there could come a time when society will view AI as separate from data. Society may come to view AI not as the object (the data) but that which manipulates, controls, and possesses data and digital property.

Brief Summary:

Granting property ownership to AI is a complex question that raises a number of legal and ethical challenges. The author suggests that further research is needed to explore these challenges and to develop a framework for granting property ownership to AI in a way that is both legally sound and ethically justifiable.

Monday, September 18, 2023

Property ownership and the legal personhood of artificial intelligence

Rafael Dean Brown (2021) 
Information & Communications Technology Law, 
30:2, 208-234. 
DOI: 10.1080/13600834.2020.1861714

Abstract

This paper adds to the discussion on the legal personhood of artificial intelligence by focusing on one area not covered by previous works on the subject – ownership of property. The author discusses the nexus between property ownership and legal personhood. The paper explains the prevailing misconceptions about the requirements of rights or duties in legal personhood, and discusses the potential for conferring rights or imposing obligations on weak and strong AI. While scholars have discussed AI owning real property and copyright, there has been limited discussion on the nexus of AI property ownership and legal personhood. The paper discusses the right to own property and the obligations of property ownership in nonhumans, and applying it to AI. The paper concludes that the law may grant property ownership and legal personhood to weak AI, but not to strong AI.

(cut)

Persona ficta and juristic person

The concepts of persona ficta and juristic person, as distinct from a natural person, trace their origins to early attempts at giving legal rights to a group of men acting in concert. While the concept of persona ficta has its roots in Roman law, ecclesiastical lawyers expanded upon it during the Middle Ages. Savigny is now credited with bringing the concept into modern legal thought. A persona ficta, under Roman law principles, could not exist unless under some ‘creative act’ of a legislative body – the State. According to Deiser, however, the concept of a persona ficta during the Middle Ages was insufficient to give it the full extent of rights associated with the modern concept of legal personhood, particularly property ownership and the recovery of property, that is, without invoking the right of an individual member. It also could not receive state-granted rights, could not occupy a definite position within a community that is distinct from its separate members, and it could not sue or be sued. In other words, persona ficta has historically required the will of the individual human member for the conferral of rights.

(cut)

In other words, weak AI, regardless of whether it is supervised or unsupervised ultimately would have to rely on some sort of intervention from its human programmer to exercise property rights. If anything, weak AI is more akin to an infant requiring guardianship, more so than a river or an idol, mainly because the weak AI functions in reliance on the human programmer’s code and data. A weak AI in possession and control of property could arguably be conferred the right to own property subject to a human agent acting on its behalf as a guardian. In this way, the law could grant a weak AI legal personhood based on its exercise of property rights in the same way that the law granted legal personhood to a corporation, river, or an idol. The law would attribute the will of the human programmer to the weak AI.

The question of whether a strong AI, if it were to become a reality, should also be granted legal personhood based on its exercise of the right to own property is altogether a different inquiry. Strong AI could theoretically take actual or constructive possession of property, and therefore exercise property rights independently the way a human would, and even in more advanced ways. However, a strong AI’s independence and autonomy implies that it could have the ability to assert and exercise property rights beyond the control of laws and human beings. This would be problematic to our current notions of property ownership and social order. In this way, the fear of a strong AI with unregulated possession of property is real, and bolsters the argument in favor of human-centred and explainable AI that requires human intervention.


My summary:

The author discusses the prevailing misconceptions about the requirements of rights or duties in legal personhood. He notes that legal personhood has already been extended to nonhuman entities such as corporations, rivers, and idols, and that their property rights are exercised through human agents or guardians acting on their behalf.

The author then considers the potential for conferring rights or imposing obligations on weak and strong AI. He argues that weak AI, which is capable of limited reasoning and decision-making, may be granted property ownership and legal personhood. This is because weak AI's exercise of property rights can be anchored in a human programmer or agent acting on its behalf, much as guardianship arrangements anchor the rights of other nonhuman legal persons.

Strong AI, on the other hand, would be capable of independent thought and action. The author argues that its autonomy means it could assert and exercise property rights beyond the control of laws and human beings, unsettling current notions of property ownership and social order. Therefore, he concludes that the law may not grant property ownership and legal personhood to strong AI.

The author's argument is based on the assumption that legal personhood is a necessary condition for property ownership. However, there is no consensus on this assumption. Some legal scholars argue that property ownership is a sufficient condition for legal personhood, meaning that anything that can own property is a legal person.

The question of whether AI can own property is a complex one that is likely to be debated for many years to come. The article "Property ownership and the legal personhood of artificial intelligence" provides a thoughtful and nuanced discussion of this issue.

Thursday, September 7, 2023

AI Should Be Terrified of Humans

Brian Kateman
Time.com
Originally posted 24 July 23

Here are two excerpts:

Humans have a pretty awful track record for how we treat others, including other humans. All manner of exploitation, slavery, and violence litters human history. And today, billions upon billions of animals are tortured by us in all sorts of obscene ways, while we ignore the plight of others. There’s no quick answer to ending all this suffering. Let’s not wait until we’re in a similar situation with AI, where their exploitation is so entrenched in our society that we don’t know how to undo it. If we take for granted starting right now that maybe, just possibly, some forms of AI are or will be capable of suffering, we can work with the intention to build a world where they don’t have to.

(cut)

Today, many scientists and philosophers are looking at the rise of artificial intelligence from the other end—as a potential risk to humans or even humanity as a whole. Some are raising serious concerns over the encoding of social biases like racism and sexism into computer programs, wittingly or otherwise, which can end up having devastating effects on real human beings caught up in systems like healthcare or law enforcement. Others are thinking earnestly about the risks of a digital-being-uprising and what we need to do to make sure we’re not designing technology that will view humans as an adversary and potentially act against us in one way or another. But more and more thinkers are rightly speaking out about the possibility that future AI should be afraid of us.

“We rationalize unmitigated cruelty toward animals—caging, commodifying, mutilating, and killing them to suit our whims—on the basis of our purportedly superior intellect,” Marina Bolotnikova writes in a recent piece for Vox. “If sentience in AI could ever emerge…I’m doubtful we’d be willing to recognize it, for the same reason that we’ve denied its existence in animals.” Working in animal protection, I’m sadly aware of the various ways humans subjugate and exploit other species. Indeed, it’s not only our impressive reasoning skills, our use of complex language, or our ability to solve difficult problems and introspect that makes us human; it’s also our unparalleled ability to increase non-human suffering. Right now there’s no reason to believe that we aren’t on a path to doing the same thing to AI. Consider that despite our moral progress as a species, we torture more non-humans today than ever before. We do this not because we are sadists, but because even when we know individual animals feel pain, we derive too much profit and pleasure from their exploitation to stop.


Wednesday, September 6, 2023

Could a Large Language Model Be Conscious?

David Chalmers
Boston Review
Originally posted 9 Aug 23

Here are two excerpts:

Consciousness also matters morally. Conscious systems have moral status. If fish are conscious, it matters how we treat them. They’re within the moral circle. If at some point AI systems become conscious, they’ll also be within the moral circle, and it will matter how we treat them. More generally, conscious AI will be a step on the path to human level artificial general intelligence. It will be a major step that we shouldn’t take unreflectively or unknowingly.

This gives rise to a second challenge: Should we create conscious AI? This is a major ethical challenge for the community. The question is important and the answer is far from obvious.

We already face many pressing ethical challenges about large language models. There are issues about fairness, about safety, about truthfulness, about justice, about accountability. If conscious AI is coming somewhere down the line, then that will raise a new group of difficult ethical challenges, with the potential for new forms of injustice added on top of the old ones. One issue is that conscious AI could well lead to new harms toward humans. Another is that it could lead to new harms toward AI systems themselves.

I’m not an ethicist, and I won’t go deeply into the ethical questions here, but I don’t take them lightly. I don’t want the roadmap to conscious AI that I’m laying out here to be seen as a path that we have to go down. The challenges I’m laying out in what follows could equally be seen as a set of red flags. Each challenge we overcome gets us closer to conscious AI, for better or for worse. We need to be aware of what we’re doing and think hard about whether we should do it.

(cut)

Where does the overall case for or against LLM consciousness stand?

Where current LLMs such as the GPT systems are concerned: I think none of the reasons for denying consciousness in these systems is conclusive, but collectively they add up. We can assign some extremely rough numbers for illustrative purposes. On mainstream assumptions, it wouldn’t be unreasonable to hold that there’s at least a one-in-three chance—that is, to have a subjective probability or credence of at least one-third—that biology is required for consciousness. The same goes for the requirements of sensory grounding, self models, recurrent processing, global workspace, and unified agency. If these six factors were independent, it would follow that there’s less than a one-in-ten chance that a system lacking all six, like a current paradigmatic LLM, would be conscious. Of course the factors are not independent, which drives the figure somewhat higher. On the other hand, the figure may be driven lower by other potential requirements X that we have not considered.

Taking all that into account might leave us with confidence somewhere under 10 percent in current LLM consciousness. You shouldn’t take the numbers too seriously (that would be specious precision), but the general moral is that given mainstream assumptions about consciousness, it’s reasonable to have a low credence that current paradigmatic LLMs such as the GPT systems are conscious.
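The arithmetic behind the "under 10 percent" figure is straightforward. If each of the six candidate requirements (biology, sensory grounding, self models, recurrent processing, global workspace, unified agency) has a credence of at least one third of being necessary for consciousness, and a current LLM lacks all six, then under independence the chance that none of them is actually required is at most (2/3)^6:

```python
credence_required = 1 / 3   # chance that each factor really is required for consciousness
n_factors = 6               # biology, grounding, self models, recurrence, workspace, agency

upper_bound = (1 - credence_required) ** n_factors
print(f"(2/3)^6 = {upper_bound:.3f}")  # ≈ 0.088, i.e. a bit under 10%
```

As Chalmers notes, the factors are not independent (which pushes the figure up) and there may be further requirements not considered (which pushes it down), so the number is only a rough anchor.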


Here are some of the key points from the article:
  1. There is no consensus on what consciousness is, so it is difficult to say definitively whether or not LLMs are conscious.
  2. Some people believe that consciousness requires carbon-based biology, but Chalmers argues that this is a form of biological chauvinism.  I agree with this completely. We can have synthetic forms of consciousness.
  3. Other people believe that LLMs are not conscious because they lack sensory processing or bodily embodiment. Chalmers argues that these objections are not decisive, but they do raise important questions about the nature of consciousness.
  4. Chalmers concludes by suggesting that we should take the possibility of LLM consciousness seriously, but that we should also be cautious about making definitive claims about it.

Monday, August 14, 2023

Artificial intelligence, superefficiency and the end of work: a humanistic perspective on meaning in life

Knell, S., & Rüther, M. (2023). 
AI and Ethics.

Abstract

How would it be assessed from an ethical point of view if human wage work were replaced by artificially intelligent systems (AI) in the course of an automation process? An answer to this question has been discussed above all under the aspects of individual well-being and social justice. Although these perspectives are important, in this article, we approach the question from a different perspective: that of leading a meaningful life, as understood in analytical ethics on the basis of the so-called meaning-in-life debate. Our thesis here is that a life without wage work loses specific sources of meaning, but can still be sufficiently meaningful in certain other ways. Our starting point is John Danaher’s claim that ubiquitous automation inevitably leads to an achievement gap. Although we share this diagnosis, we reject his provocative solution according to which game-like virtual realities could be an adequate substitute source of meaning. Subsequently, we outline our own systematic alternative which we regard as a decidedly humanistic perspective. It focuses both on different kinds of social work and on rather passive forms of being related to meaningful contents. Finally, we go into the limits and unresolved points of our argumentation as part of an outlook, but we also try to defend its fundamental persuasiveness against a potential objection.

Concluding remarks

In this article, we explored the question of how we can find meaning in a post-work world. Our answer relies on a critique of John Danaher’s utopia of games and sticks to the humanistic idea, namely that we do not have to alter our human lifeform in any extensive way and can keep up our orientation toward common ideals, such as working toward the good, the true, and the beautiful.

Our proposal still has some shortcomings, which include the following two that we cannot deal with extensively but at least want to comment on briefly. First, we assumed that certain professional fields, especially in the meaning-conferring area of the good, cannot be automated, so that the possibility of mini-jobs in these areas can be considered. This assumption is based on a substantial thesis from the philosophy of mind, namely that AI systems cannot develop consciousness and consequently cannot develop genuine empathy. This assumption needs to be further elaborated, especially in view of some forecasts that even the altruistic and philanthropic professions are not immune to automation by superefficient systems. Second, we have adopted without further critical discussion the premise of the hybrid standard model of a meaningful life, according to which meaning-conferring objective value is to be found in the three spheres of the true, the good, and the beautiful. We take this premise to be intuitively appealing, but a further elaboration of our argument would have to work out whether this triad is really exhaustive and, if so, due to which underlying more general principle.


Full transparency: I am a big John Danaher fan. Regardless, here is my summary:

Humans are meaning-makers. We find meaning in our work, our relationships, and our engagement with the world. The article examines the potential impact of AI on the meaning of work, and the authors make some good points. However, I think their solution is somewhat idealistic. Social relationships and engagement with the world can certainly provide meaning, but those sources of meaning will be harder to sustain in a world where AI is doing most of the work. We will still need ways to cooperate, achieve, and interact in pursuit of superordinate goals, and to align our lives with core human principles such as meaning-making, pattern repetition, cooperation, and values-based behavior.
  • The authors focus on the potential impact of AI on the meaning of work, while acknowledging that other forces, such as broader automation and globalization, are also reshaping it.
  • The authors' solution is based on the idea that meaning comes from relationships and engagement with the world. However, there are other theories about the meaning of life, such as the idea that meaning comes from self-actualization or from religious faith.
  • The authors acknowledge that their solution is not perfect, but they argue that it is a better alternative than Danaher's. However, I think it is important to consider all of the options before deciding which one is best. Ultimately, it will come down to a values-based decision, as there seems to be no single correct solution.

Saturday, July 8, 2023

Microsoft Scraps Entire Ethical AI Team Amid AI Boom

Lauren Leffer
gizmodo.com
Updated on March 14, 2023
Still relevant

Microsoft is currently in the process of shoehorning text-generating artificial intelligence into every single product that it can. And starting this month, the company will be continuing on its AI rampage without a team dedicated to internally ensuring those AI features meet Microsoft’s ethical standards, according to a Monday night report from Platformer.

Microsoft has scrapped its whole Ethics and Society team within the company’s AI sector, as part of ongoing layoffs set to impact 10,000 total employees, per Platformer. The company maintains its Office of Responsible AI, which creates the broad, Microsoft-wide principles to govern corporate AI decision making. But the ethics and society taskforce, which bridged the gap between policy and products, is reportedly no more.

Gizmodo reached out to Microsoft to confirm the news. In response, a company spokesperson sent the following statement:
Microsoft remains committed to developing and designing AI products and experiences safely and responsibly. As the technology has evolved and strengthened, so has our investment, which at times has meant adjusting team structures to be more effective. For example, over the past six years we have increased the number of people within our product teams who are dedicated to ensuring we adhere to our AI principles. We have also increased the scale and scope of our Office of Responsible AI, which provides cross-company support for things like reviewing sensitive use cases and advocating for policies that protect customers.

To Platformer, the company reportedly previously shared this slightly different version of the same statement:

Microsoft is committed to developing AI products and experiences safely and responsibly...Over the past six years we have increased the number of people across our product teams within the Office of Responsible AI who, along with all of us at Microsoft, are accountable for ensuring we put our AI principles into practice...We appreciate the trailblazing work the ethics and society team did to help us on our ongoing responsible AI journey.

Note that, in this older version, Microsoft does inadvertently confirm that the ethics and society team is no more. The company also previously specified that the staffing increases were within the Office of Responsible AI, rather than among people generally “dedicated to ensuring we adhere to our AI principles.”

Yet, despite Microsoft’s reassurances, former employees told Platformer that the Ethics and Society team played a key role translating big ideas from the responsibility office into actionable changes at the product development level.

The info is here.