Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, April 23, 2024

Machines and Morality

Seth Lazar
The New York Times
Originally posted June 19, 2023

Here is an excerpt:

I’ve based my philosophical work on the belief, inspired by Immanuel Kant, that humans have a special moral status — that we command respect regardless of whatever value we contribute to the world. Drawing on the work of the 20th-century political philosopher John Rawls, I’ve assumed that human moral status derives from our rational autonomy. This autonomy has two parts: first, our ability to decide on goals and commit to them; second, our possession of a sense of justice and the ability to resist norms imposed by others if they seem unjust.

Existing chatbots are incapable of this kind of integrity, commitment and resistance. But Bing’s unhinged debut suggests that, in principle, it will soon be possible to design a chatbot that at least behaves like it has the kind of autonomy described by Rawls. Every large language model optimizes for a particular set of values, written into its “developer message,” or “metaprompt,” which shapes how it responds to text input by a user. These metaprompts display a remarkable ability to affect a bot’s behavior. We could write a metaprompt that inscribes a set of values, but then emphasizes that the bot should critically examine them and revise or resist them if it sees fit. We can invest a bot with long-term memory that allows it to functionally perform commitment and integrity. And large language models are already impressively capable of parsing and responding to moral reasons. Researchers are already developing software that simulates human behavior and has some of these properties.

If the Rawlsian ability to revise and pursue goals and to recognize and resist unjust norms is sufficient for moral status, then we’re much closer than I thought to building chatbots that meet this standard. That means one of two things: either we should start thinking about “robot rights,” or we should deny that rational autonomy is sufficient for moral standing. I think we should take the second path. What else does moral standing require? I believe it’s consciousness.
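
To make the excerpt's "metaprompt" idea concrete, here is a minimal sketch of a system message that inscribes a set of values but then invites the model to examine and resist them. It assumes the OpenAI Python SDK; the model name, prompt wording, and example question are placeholders of my own, not anything from the article.

# A minimal sketch of a value-inscribing "metaprompt" (system message) that
# also invites critical examination, per the excerpt above. Assumes the
# OpenAI Python SDK; model name and wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

METAPROMPT = (
    "You are an assistant guided by these values: honesty, non-maleficence, "
    "and fairness. Before answering, critically examine how these values "
    "apply to the request, and say so explicitly if following them would "
    "be unjust in this case."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": METAPROMPT},
        {"role": "user", "content": "Should I report a colleague's minor billing error?"},
    ],
)
print(response.choices[0].message.content)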


Here are some thoughts:

This article explores the philosophical implications of large language models, particularly their ability to mimic human conversation and behavior. The author argues that while these models may appear autonomous, they lack the self-consciousness necessary for moral status, a distinction that is crucial for determining how we should interact with and develop these technologies.

Because they lack self-consciousness, the author contends, large language models cannot truly be said to have their own goals or commitments, nor can they experience the world in a way that grounds their actions in a sense of self. Despite their impressive capabilities, then, these models do not possess moral status and do not warrant the same rights or respect as humans.

The article concludes by suggesting that rather than debating the possibility of "robot rights," we should focus on understanding what truly makes humans worthy of moral respect. The author argues that it is self-consciousness, rather than simulated autonomy alone, that grounds our moral standing and allows us to govern ourselves and make meaningful choices about how to live.

Sunday, December 3, 2023

ChatGPT one year on: who is using it, how and why?

Ghassemi, M., Birhane, A., et al.
Nature 624, 39-41 (2023)
doi: https://doi.org/10.1038/d41586-023-03798-6

Here is an excerpt:

More pressingly, text and image generation are prone to societal biases that cannot be easily fixed. In health care, this was illustrated by Tessa, a rule-based chatbot designed to help people with eating disorders, run by a US non-profit organization. After it was augmented with generative AI, the now-suspended bot gave detrimental advice. In some US hospitals, generative models are being used to manage and generate portions of electronic medical records. However, the large language models (LLMs) that underpin these systems are not giving medical advice and so do not require clearance by the US Food and Drug Administration. This means that it’s effectively up to the hospitals to ensure that LLM use is fair and accurate. This is a huge concern.

The use of generative AI tools, in general and in health settings, needs more research with an eye towards social responsibility rather than efficiency or profit. The tools are flexible and powerful enough to make billing and messaging faster — but a naive deployment will entrench existing equity issues in these areas. Chatbots have been found, for example, to recommend different treatments depending on a patient’s gender, race and ethnicity and socioeconomic status (see J. Kim et al. JAMA Netw. Open 6, e2338050; 2023).

Ultimately, it is important to recognize that generative models echo and extend the data they have been trained on. Making generative AI work to improve health equity, for instance by using empathy training or suggesting edits that decrease biases, is especially important given how susceptible humans are to convincing, and human-like, generated texts. Rather than taking the health-care system we have now and simply speeding it up — with the risk of exacerbating inequalities and throwing in hallucinations — AI needs to target improvement and transformation.


Here is my summary:

The article on ChatGPT's one-year anniversary presents a comprehensive analysis of its usage, exploring the diverse user base, applications, and underlying motivations driving its adoption. It reveals that ChatGPT has found traction across a wide spectrum of users, including writers, developers, students, professionals, and hobbyists. This broad appeal can be attributed to its adaptability in assisting with a myriad of tasks, from generating creative content to aiding in coding challenges and providing language translation support.

The analysis further dissects how users interact with ChatGPT, revealing distinct patterns of use. Some leverage it for brainstorming ideas, drafting content, or generating creative writing, while others turn to it for programming assistance, treating it as a virtual coding companion. The article also explores the strategies users employ to improve the model's output, such as providing more context or breaking a query into smaller parts (sketched below). Even so, issues remain with biases, inaccurate information, and inappropriate uses.
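
As a concrete illustration of that decomposition strategy, here is a minimal sketch that chains narrower sub-questions, feeding each answer back in as context for the next. It assumes the OpenAI Python SDK; the ask() helper, model name, and prompts are illustrative, not a workflow described in the article.

# A minimal sketch of the "break the query into smaller parts" strategy
# noted above. Assumes the OpenAI Python SDK; the ask() helper, model name,
# and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str, context: str = "") -> str:
    """Send one focused sub-question, optionally with accumulated context."""
    messages = []
    if context:
        messages.append({"role": "system", "content": "Context so far:\n" + context})
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    return reply.choices[0].message.content

# Instead of one broad request, chain narrower ones.
outline = ask("Outline a short post on LLM adoption in three bullet points.")
draft = ask("Expand the first bullet point into a paragraph.", context=outline)
print(draft)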