Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, December 3, 2023

ChatGPT one year on: who is using it, how and why?

Ghassemi, M., Birhane, A., et al.
Nature 624, 39-41 (2023)
doi: https://doi.org/10.1038/d41586-023-03798-6

Here is an excerpt:

More pressingly, text and image generation are prone to societal biases that cannot be easily fixed. In health care, this was illustrated by Tessa, a rule-based chatbot designed to help people with eating disorders, run by a US non-profit organization. After it was augmented with generative AI, the now-suspended bot gave detrimental advice. In some US hospitals, generative models are being used to manage and generate portions of electronic medical records. However, the large language models (LLMs) that underpin these systems are not giving medical advice and so do not require clearance by the US Food and Drug Administration. This means that it’s effectively up to the hospitals to ensure that LLM use is fair and accurate. This is a huge concern.

The use of generative AI tools, in general and in health settings, needs more research with an eye towards social responsibility rather than efficiency or profit. The tools are flexible and powerful enough to make billing and messaging faster — but a naive deployment will entrench existing equity issues in these areas. Chatbots have been found, for example, to recommend different treatments depending on a patient’s gender, race and ethnicity and socioeconomic status (see J. Kim et al. JAMA Netw. Open 6, e2338050; 2023).

Ultimately, it is important to recognize that generative models echo and extend the data they have been trained on. Making generative AI work to improve health equity, for instance by using empathy training or suggesting edits that decrease biases, is especially important given how susceptible humans are to convincing, and human-like, generated texts. Rather than taking the health-care system we have now and simply speeding it up — with the risk of exacerbating inequalities and throwing in hallucinations — AI needs to target improvement and transformation.

Here is my summary:

The article, published for ChatGPT's one-year anniversary, examines who is using the tool, how, and why. It reveals that ChatGPT has found traction across a wide spectrum of users, including writers, developers, students, professionals, and hobbyists. This broad appeal stems from its versatility across tasks, from generating creative content to aiding with coding challenges and providing language translation support.

The analysis further dissects how users interact with ChatGPT, showcasing distinct patterns of utilization. Some users leverage it for brainstorming ideas, drafting content, or generating creative writing, while others turn to it for programming assistance, using it as a virtual coding companion. The article also explores the strategies users employ to improve the model's output, such as providing more context or breaking queries into smaller parts. Even so, issues with biases, inaccurate information, and inappropriate uses persist.