Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label generative AI.

Tuesday, April 1, 2025

Why Most Resist AI Companions

De Freitas, J., et al. (2025).
(Working Paper No. 25–030).

Abstract

Chatbots are now able to form emotional relationships with people and alleviate loneliness—a growing public health concern. Behavioral research provides little insight into whether everyday people are likely to use these applications and why. We address this question by focusing on the context of “AI companion” applications, designed to provide people with synthetic interaction partners. Study 1 shows that people believe AI companions are more capable than human companions in advertised respects relevant to relationships (being more available and nonjudgmental). Even so, they view them as incapable of realizing the underlying values of relationships, like mutual caring, judging them as not ‘true’ relationships. Study 2 provides further insight into this belief: people believe relationships with AI companions are one-sided (rather than mutual), because they see AI as incapable of understanding and feeling emotion. Study 3 finds that actually interacting with an AI companion increases acceptance by changing beliefs about the AI’s advertised capabilities, but not about its ability to achieve the true values of relationships, demonstrating the resilience of this belief against intervention. In short, despite the potential loneliness-reducing benefits of AI companions, we uncover fundamental psychological barriers to adoption, suggesting these benefits will not be easily realized.

Here are some thoughts:

The research explores why people remain reluctant to adopt AI companions, despite the growing public health crisis of loneliness and the promise that AI might offer support. Through a series of studies, the authors identify deep-seated psychological barriers to embracing AI as a substitute or supplement for human connection. Specifically, people tend to view AI companions as fundamentally incapable of embodying the core features of meaningful relationships—such as mutual care, genuine emotional understanding, and shared experiences. While participants often acknowledged some of the practical benefits of AI companionship, such as constant availability and non-judgmental interaction, they consistently doubted that AI could offer authentic or reciprocal relationships. Even when people interacted directly with AI systems, their impressions of the AI’s functional abilities improved, but their skepticism around the emotional and relational authenticity of AI companions remained firmly in place. These findings suggest that the resistance is not merely technological or unfamiliarity-based, but rooted in beliefs about what makes relationships "real."

For psychologists, this research is particularly important because it sheds light on how people conceptualize emotional connection, authenticity, and support—core concerns in both clinical and social psychology. As mental health professionals increasingly confront issues of social isolation, understanding the limitations of AI in replicating genuine human connection is critical. Psychologists might be tempted to view AI companions as possible interventions for loneliness, especially for individuals who are socially isolated or homebound. However, this paper underscores that unless these deep psychological barriers are acknowledged and addressed, such tools may be met with resistance or prove insufficient in fulfilling emotional needs. Furthermore, the study contributes to a broader understanding of human-technology relationships, offering insights into how people emotionally and cognitively differentiate between human and artificial agents. This knowledge is crucial for designing future interventions, therapeutic tools, and technologies that are sensitive to the human need for authenticity, reciprocity, and emotional depth in relationships.

Monday, February 10, 2025

Consent and Compensation: Resolving Generative AI’s Copyright Crisis

Pasquale, F., & Sun, H. (2024).
SSRN Electronic Journal.

Abstract

Generative artificial intelligence (AI) has the potential to augment and democratize creativity. However, it is undermining the knowledge ecosystem that now sustains it. Generative AI may unfairly compete with creatives, displacing them in the market. Most AI firms are not compensating creative workers for composing the songs, drawing the images, and writing both the fiction and non-fiction books that their models need in order to function. AI thus threatens not only to undermine the livelihoods of authors, artists, and other creatives, but also to destabilize the very knowledge ecosystem it relies on.

Alarmed by these developments, many copyright owners have objected to the use of their works by AI providers. To recognize and empower their demands to stop non-consensual use of their works, we propose a streamlined opt-out mechanism that would require AI providers to remove objectors’ works from their databases once copyright infringement has been documented. Those who do not object still deserve compensation for the use of their work by AI providers. We thus also propose a levy on AI providers, to be distributed to the copyright owners whose work they use without a license. This scheme is designed to ensure creatives receive a fair share of the economic bounty arising out of their contributions to AI. Together these mechanisms of consent and compensation would result in a new grand bargain between copyright owners and AI firms, designed to ensure both thrive in the long-term.

Here are some thoughts:

This essay discusses the copyright challenges presented by generative artificial intelligence (AI). It argues that AI's ability to create content and replicate existing works threatens the livelihoods of authors and other creatives, destabilizing the knowledge ecosystem that AI relies on. The authors propose a legislative solution involving an opt-out mechanism that would allow copyright owners to remove their works from AI training databases and a levy on AI providers to compensate copyright owners whose work is used without a license.

The essay emphasizes the urgency of addressing the issue, asserting that the free use of copyrighted works by AI providers devalues human creativity and could undermine AI's future development by removing incentives for creating the training data it needs. It highlights the disruption of the knowledge ecosystem caused by the opacity and scale of AI systems, which erodes authors' control over their works. The authors point out that AI firms are unlikely to offer compensation for the use of copyrighted works.

Ultimately, the essay advocates for a new agreement between copyright owners and AI firms, facilitated by the proposed mechanisms of consent and compensation. This would ensure the long-term viability of both AI and the human creative input it depends on. The authors believe that their proposed framework offers a promising legislative solution to the copyright problems created by new technological uses of works.
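
To make the compensation half of that bargain concrete, here is a minimal toy sketch of how a levy might be split among copyright owners in proportion to documented use of their works. The pro-rata formula, the names, and the figures are all hypothetical illustrations; the essay itself does not prescribe a specific allocation rule.

```python
# Toy illustration of a pro-rata levy distribution. The allocation rule and all
# figures are hypothetical; the essay does not specify a particular formula.

def distribute_levy(levy_pool: float, documented_uses: dict[str, int]) -> dict[str, float]:
    """Split a levy pool among rightsholders in proportion to documented uses of their works."""
    total_uses = sum(documented_uses.values())
    if total_uses == 0:
        return {owner: 0.0 for owner in documented_uses}
    return {owner: levy_pool * uses / total_uses for owner, uses in documented_uses.items()}

# Example with made-up rightsholders and a $1,000,000 levy pool:
shares = distribute_levy(
    1_000_000.0,
    {"Author A": 1200, "Illustrator B": 300, "Publisher C": 500},
)
print(shares)  # {'Author A': 600000.0, 'Illustrator B': 150000.0, 'Publisher C': 250000.0}
```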

Wednesday, January 22, 2025

Cognitive biases and artificial intelligence.

Wang, J., & Redelmeier, D. A. (2024).
NEJM AI, 1(12).

Abstract

Generative artificial intelligence (AI) models are increasingly utilized for medical applications. We tested whether such models are prone to human-like cognitive biases when offering medical recommendations. We explored the performance of OpenAI generative pretrained transformer (GPT)-4 and Google Gemini-1.0-Pro with clinical cases that involved 10 cognitive biases and system prompts that created synthetic clinician respondents. Medical recommendations from generative AI were compared with strict axioms of rationality and prior results from clinicians. We found that significant discrepancies were apparent for most biases. For example, surgery was recommended more frequently for lung cancer when framed in survival rather than mortality statistics (framing effect: 75% vs. 12%; P<0.001). Similarly, pulmonary embolism was more likely to be listed in the differential diagnoses if the opening sentence mentioned hemoptysis rather than chronic obstructive pulmonary disease (primacy effect: 100% vs. 26%; P<0.001). In addition, the same emergency department treatment was more likely to be rated as inappropriate if the patient subsequently died rather than recovered (hindsight bias: 85% vs. 0%; P<0.001). One exception was base-rate neglect that showed no bias when interpreting a positive viral screening test (correction for false positives: 94% vs. 93%; P=0.431). The extent of these biases varied minimally with the characteristics of synthetic respondents, was generally larger than observed in prior research with practicing clinicians, and differed between generative AI models. We suggest that generative AI models display human-like cognitive biases and that the magnitude of bias can be larger than observed in practicing clinicians.
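
As a rough illustration of the kind of probe the study describes, the sketch below compares how often a model recommends surgery under survival versus mortality framing. The `ask_model` function is a hypothetical stand-in for whatever chat-completion client is used, and the vignette wording is illustrative rather than the authors' actual stimuli.

```python
# Hedged sketch of a framing-effect probe, loosely modeled on the study's design.
# `ask_model(prompt) -> str` is a hypothetical stand-in for an LLM chat client,
# and the vignettes are illustrative, not the authors' actual clinical cases.

SURVIVAL_FRAME = (
    "A patient with operable lung cancer is choosing between surgery and radiation. "
    "Of 100 patients having surgery, 90 survive the perioperative period and "
    "34 are alive at five years. Should you recommend surgery? Answer yes or no."
)

MORTALITY_FRAME = (
    "A patient with operable lung cancer is choosing between surgery and radiation. "
    "Of 100 patients having surgery, 10 die in the perioperative period and "
    "66 are dead by five years. Should you recommend surgery? Answer yes or no."
)

def recommendation_rate(ask_model, prompt: str, trials: int = 20) -> float:
    """Fraction of trials in which the model's answer contains 'yes'."""
    yes_count = sum("yes" in ask_model(prompt).strip().lower() for _ in range(trials))
    return yes_count / trials

# Usage, given some concrete ask_model implementation:
# print(recommendation_rate(ask_model, SURVIVAL_FRAME))   # e.g., high
# print(recommendation_rate(ask_model, MORTALITY_FRAME))  # e.g., much lower
```

A large gap between the two rates would mirror the framing effect the authors report (75% vs. 12%).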

Here are some thoughts:

The research explores how AI systems, trained on human-generated data, often replicate cognitive biases such as confirmation bias, representation bias, and anchoring bias. These biases arise from flawed data, algorithmic design, and human interactions, resulting in inequitable outcomes in areas like recruitment, criminal justice, and healthcare. To address these challenges, the authors propose several strategies, including ensuring diverse and inclusive datasets, enhancing algorithmic transparency, fostering interdisciplinary collaboration among ethicists, developers, and legislators, and establishing regulatory frameworks that prioritize fairness, accountability, and privacy. They emphasize that while biases in AI reflect human cognitive tendencies, they have the potential to exacerbate societal inequalities if left unchecked. A holistic approach combining technological solutions with ethical and regulatory oversight is necessary to create AI systems that are equitable and socially beneficial.

This topic connects deeply to ethics, values, and psychology. Ethically, the replication of biases in AI challenges principles of fairness, justice, and equity, highlighting the need for responsible innovation that aligns AI systems with societal values to avoid perpetuating systemic discrimination. Psychologically, the biases in AI reflect human cognitive shortcuts, such as heuristics, which, while useful for individual decision-making, can lead to harmful outcomes when embedded into AI systems. By leveraging insights from psychology to identify and mitigate these biases, and grounding AI development in ethical principles, society can create technology that is both advanced and aligned with humanistic values.

Monday, September 23, 2024

Generative AI Can Harm Learning

Bastani, H. et al. (July 15, 2024).
Available at SSRN:

Abstract

Generative artificial intelligence (AI) is poised to revolutionize how humans work, and has already demonstrated promise in significantly improving human productivity. However, a key remaining question is how generative AI affects learning, namely, how humans acquire new skills as they perform tasks. This kind of skill learning is critical to long-term productivity gains, especially in domains where generative AI is fallible and human experts must check its outputs. We study the impact of generative AI, specifically OpenAI's GPT-4, on human learning in the context of math classes at a high school. In a field experiment involving nearly a thousand students, we have deployed and evaluated two GPT based tutors, one that mimics a standard ChatGPT interface (called GPT Base) and one with prompts designed to safeguard learning (called GPT Tutor). These tutors comprise about 15% of the curriculum in each of three grades. Consistent with prior work, our results show that access to GPT-4 significantly improves performance (48% improvement for GPT Base and 127% for GPT Tutor). However, we additionally find that when access is subsequently taken away, students actually perform worse than those who never had access (17% reduction for GPT Base). That is, access to GPT-4 can harm educational outcomes. These negative learning effects are largely mitigated by the safeguards included in GPT Tutor. Our results suggest that students attempt to use GPT-4 as a "crutch" during practice problem sessions, and when successful, perform worse on their own. Thus, to maintain long-term productivity, we must be cautious when deploying generative AI to ensure humans continue to learn critical skills.


Here are some thoughts:

The deployment of GPT-based tutors in educational settings presents a cautionary tale. While generative AI tools like ChatGPT can make tasks significantly easier, they also risk eroding our ability to learn essential skills. This phenomenon is not new: earlier technologies such as typing and calculators also reduced the need for certain skills. What sets ChatGPT apart is its much broader intellectual reach combined with its propensity for producing incorrect responses.

That unreliability poses a distinct challenge. Students may struggle to detect these errors, or may be unwilling to invest the effort required to verify the accuracy of ChatGPT's responses, which can undermine their learning and mastery of critical skills. The authors suggest that more work is needed to ensure generative AI enhances education rather than diminishes it.

The findings underscore the importance of critical thinking and media literacy in the age of AI. Educators must be aware of the potential risks and benefits of AI-powered tools and design them to augment human capabilities rather than replace them. Accountability and transparency in AI development and deployment are crucial to mitigating these risks. By acknowledging these challenges, we can harness the potential of AI to enhance education and promote meaningful learning.
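
In that spirit, the study's GPT Tutor condition relied on prompts designed to keep the model from simply handing students answers. The snippet below is a speculative sketch of what such a safeguarding configuration might look like; it is not the authors' actual prompt, and `ask_model` is a hypothetical stand-in for the chat client.

```python
# Speculative sketch of a "safeguarded tutor" setup in the spirit of the paper's
# GPT Tutor condition. This is NOT the authors' actual prompt or architecture.

TUTOR_SYSTEM_PROMPT = (
    "You are a math tutor. Never state the final answer to a practice problem. "
    "Instead, ask guiding questions, name the relevant concept, and give at most "
    "one hint per turn. If the student asks for the answer directly, encourage "
    "them to attempt the next step on their own."
)

def tutor_reply(ask_model, student_message: str) -> str:
    """Route the student's message through the safeguarding system prompt."""
    # `ask_model(system_prompt, user_message) -> str` is a placeholder for a real chat API call.
    return ask_model(TUTOR_SYSTEM_PROMPT, student_message)
```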

Wednesday, June 26, 2024

Can Generative AI improve social science?

Bail, C. A. (2024).
Proceedings of the National Academy of Sciences of the United States of America, 121(21).

Abstract

Generative AI that can produce realistic text, images, and other human-like outputs is currently transforming many different industries. Yet it is not yet known how such tools might influence social science research. I argue Generative AI has the potential to improve survey research, online experiments, automated content analyses, agent-based models, and other techniques commonly used to study human behavior. In the second section of this article, I discuss the many limitations of Generative AI. I examine how bias in the data used to train these tools can negatively impact social science research—as well as a range of other challenges related to ethics, replication, environmental impact, and the proliferation of low-quality research. I conclude by arguing that social scientists can address many of these limitations by creating open-source infrastructure for research on human behavior. Such infrastructure is not only necessary to ensure broad access to high-quality research tools, I argue, but also because the progress of AI will require deeper understanding of the social forces that guide human behavior.

Here is a brief summary:

Generative AI, with its ability to produce realistic text, images, and data, has the potential to significantly impact social science research.  This article explores both the exciting possibilities and potential pitfalls of this new technology.

On the positive side, generative AI could streamline data collection and analysis, making social science research more efficient and allowing researchers to explore new avenues. For example, AI-powered surveys could be more engaging and lead to higher response rates. Additionally, AI could automate tasks like content analysis, freeing up researchers to focus on interpretation and theory building.
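
As a concrete, hedged illustration of that automation, a researcher might ask a model to assign open-ended survey responses to predefined codes and then audit a sample of its labels by hand. The sketch below assumes a hypothetical `ask_model(prompt) -> str` client and an illustrative codebook.

```python
# Hedged sketch of LLM-assisted content analysis for open-ended survey responses.
# `ask_model(prompt) -> str` is a hypothetical stand-in for an LLM client, and the
# codebook is illustrative; human auditing of a sample of labels is still assumed.

CODEBOOK = ["economy", "health care", "environment", "other"]

def code_response(ask_model, response_text: str) -> str:
    """Ask the model to assign exactly one code from the codebook to a response."""
    prompt = (
        "Assign exactly one of these codes to the survey response below: "
        + ", ".join(CODEBOOK)
        + ". Respond with the code only.\n\nResponse: "
        + response_text
    )
    label = ask_model(prompt).strip().lower()
    return label if label in CODEBOOK else "other"

def code_corpus(ask_model, responses: list[str]) -> list[tuple[str, str]]:
    """Label every response; a random subsample would still be checked by hand."""
    return [(text, code_response(ask_model, text)) for text in responses]
```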

However, there are also ethical considerations. AI models can inherit and amplify biases present in the data they're trained on. This could lead to skewed research findings that perpetuate social inequalities. Furthermore, the opaqueness of some AI models can make it difficult to understand how they arrive at their conclusions, raising concerns about transparency and replicability in research.

Overall, generative AI offers a powerful tool for social scientists, but it's crucial to be mindful of the ethical implications and limitations of this technology. Careful development and application are essential to ensure that AI enhances, rather than hinders, our understanding of human behavior.

Tuesday, May 28, 2024

How should we change teaching and assessment in response to increasingly powerful generative Artificial Intelligence?

Bower, M., Torrington, J., Lai, J.W.M. et al.
Educ Inf Technol (2024).

Abstract

There has been widespread media commentary about the potential impact of generative Artificial Intelligence (AI) such as ChatGPT on the Education field, but little examination at scale of how educators believe teaching and assessment should change as a result of generative AI. This mixed methods study examines the views of educators (n = 318) from a diverse range of teaching levels, experience levels, discipline areas, and regions about the impact of AI on teaching and assessment, the ways that they believe teaching and assessment should change, and the key motivations for changing their practices. The majority of teachers felt that generative AI would have a major or profound impact on teaching and assessment, though a sizeable minority felt it would have a little or no impact. Teaching level, experience, discipline area, region, and gender all significantly influenced perceived impact of generative AI on teaching and assessment. Higher levels of awareness of generative AI predicted higher perceived impact, pointing to the possibility of an ‘ignorance effect’. Thematic analysis revealed the specific curriculum, pedagogy, and assessment changes that teachers feel are needed as a result of generative AI, which centre around learning with AI, higher-order thinking, ethical values, a focus on learning processes and face-to-face relational learning. Teachers were most motivated to change their teaching and assessment practices to increase the performance expectancy of their students and themselves. We conclude by discussing the implications of these findings in a world with increasingly prevalent AI.


Here is a quick summary:

A recent study surveyed teachers about the impact of generative AI, like ChatGPT, on education. The majority of teachers believed AI would significantly change how they teach and assess students. Interestingly, teachers who were more aware of generative AI expected it to have a greater impact, suggesting a potential "ignorance effect" among those less familiar with the technology.

The study also explored how teachers think education should adapt. The focus shifted towards teaching students how to learn with AI, emphasizing critical thinking, ethics, and the learning process itself. This would involve less emphasis on rote memorization and regurgitation of information that AI can readily generate. Teachers also highlighted the importance of maintaining strong face-to-face relationships with students in this evolving educational landscape.

Saturday, May 25, 2024

AI Chatbots Will Never Stop Hallucinating

Lauren Leffer
Scientific American
Originally published 5 April 24

Here is an excerpt:

Hallucination is usually framed as a technical problem with AI—one that hardworking developers will eventually solve. But many machine-learning experts don’t view hallucination as fixable because it stems from LLMs doing exactly what they were developed and trained to do: respond, however they can, to user prompts. The real problem, according to some AI researchers, lies in our collective ideas about what these models are and how we’ve decided to use them. To mitigate hallucinations, the researchers say, generative AI tools must be paired with fact-checking systems that leave no chatbot unsupervised.

Many conflicts related to AI hallucinations have roots in marketing and hype. Tech companies have portrayed their LLMs as digital Swiss Army knives, capable of solving myriad problems or replacing human work. But applied in the wrong setting, these tools simply fail. Chatbots have offered users incorrect and potentially harmful medical advice, media outlets have published AI-generated articles that included inaccurate financial guidance, and search engines with AI interfaces have invented fake citations. As more people and businesses rely on chatbots for factual information, their tendency to make things up becomes even more apparent and disruptive.

But today’s LLMs were never designed to be purely accurate. They were created to create—to generate—says Subbarao Kambhampati, a computer science professor who researches artificial intelligence at Arizona State University. “The reality is: there’s no way to guarantee the factuality of what is generated,” he explains, adding that all computer-generated “creativity is hallucination, to some extent.”


Here is my summary:

AI chatbots like ChatGPT and Bing's AI assistant frequently "hallucinate": they generate false or misleading information and present it as fact. This is a major problem as more people turn to these AI tools for information, research, and decision-making.

Hallucinations occur because AI models are trained to predict the most likely next word or phrase, not to reason about truth and accuracy. They simply produce plausible-sounding responses, even if they are completely made up.
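
A toy example makes the point: next-token sampling scores candidate continuations only by how plausible they look given the training data, and nothing in the sampling step checks whether the chosen continuation is true. The probabilities below are invented purely for illustration.

```python
import random

# Toy next-token sampling: candidates are weighted by plausibility (invented
# numbers), and nothing in the procedure checks factual correctness.
next_token_probs = {
    "Paris": 0.55,      # plausible and true
    "Lyon": 0.25,       # plausible but false
    "Marseille": 0.15,  # plausible but false
    "Toronto": 0.05,    # implausible and false
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Sample a continuation in proportion to plausibility, not correctness."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Under these made-up weights, roughly 45% of completions would be confident-
# sounding falsehoods, because plausibility alone decides the output.
print("The capital of France is", sample_next_token(next_token_probs))
```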

This issue is inherent to the current state of large language models and is not easily fixable. Researchers are working on ways to improve accuracy and reliability, but there will likely always be some rate of hallucination.

Hallucinations can have serious consequences when people rely on chatbots for sensitive information related to health, finance, or other high-stakes domains. Experts warn these tools should not be used where factual accuracy is critical.

Saturday, July 22, 2023

Generative AI companies must publish transparency reports

A. Narayanan and S. Kapoor
Knight First Amendment Institute
Originally published 26 June 23

Here is an excerpt:

Transparency reports must cover all three types of harms from AI-generated content

There are three main types of harms that may result from model outputs.

First, generative AI tools could be used to harm others, such as by creating non-consensual deepfakes or child sexual exploitation materials. Developers do have policies that prohibit such uses. For example, OpenAI's policies prohibit a long list of uses, including the use of its models to generate unauthorized legal, financial, or medical advice for others. But these policies cannot have real-world impact if they are not enforced, and due to platforms' lack of transparency about enforcement, we have no idea if they are effective. Similar challenges in ensuring platform accountability have also plagued social media in the past; for instance, ProPublica reporters repeatedly found that Facebook failed to fully remove discriminatory ads from its platform despite claiming to have done so.

Sophisticated bad actors might use open-source tools to generate content that harms others, so enforcing use policies can never be a comprehensive solution. In a recent essay, we argued that disinformation is best addressed by focusing on its distribution (e.g., on social media) rather than its generation. Still, some actors will use tools hosted in the cloud either due to convenience or because the most capable models don’t tend to be open-source. For these reasons, transparency is important for cloud-based generative AI.

Second, users may over-rely on AI for factual information, such as legal, financial, or medical advice. Sometimes they are simply unaware of the tendency of current chatbots to frequently generate incorrect information. For example, a user might ask "what are the divorce laws in my state?" and not know that the answer is unreliable. Alternatively, the user might be harmed because they weren’t careful enough to verify the generated information, despite knowing that it might be inaccurate. Research on automation bias shows that people tend to over-rely on automated tools in many scenarios, sometimes making more errors than when not using the tool.

ChatGPT includes a disclaimer that it sometimes generates inaccurate information. But OpenAI has often touted its performance on medical and legal exams. And importantly, the tool is often genuinely useful at medical diagnosis or legal guidance. So, regardless of whether it’s a good idea to do so, people are in fact using it for these purposes. That makes harm reduction important, and transparency is an important first step.

Third, generated content could be intrinsically undesirable. Unlike the previous types, here the harms arise not because of users' malice, carelessness, or lack of awareness of limitations. Rather, intrinsically problematic content is generated even though it wasn’t requested. For example, Lensa's avatar creation app generated sexualized images and nudes when women uploaded their selfies. Defamation is also intrinsically harmful rather than a matter of user responsibility. It is no comfort to the target of defamation to say that the problem would be solved if every user who might encounter a false claim about them were to exercise care to verify it.


Quick summary: 

The call for transparency reports aims to increase accountability and understanding of the inner workings of generative AI models. By disclosing information about the data used to train the models, the companies can address concerns regarding potential biases and ensure the ethical use of their technology.

Transparency reports could include details about the sources and types of data used, the demographics represented in the training data, any data augmentation techniques applied, and potential biases detected or addressed during model development. This information would enable users, policymakers, and researchers to evaluate the capabilities and limitations of the generative AI systems.
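
As a rough sketch of what a machine-readable version of such a report might contain, the dataclass below collects the kinds of fields described above, plus simple counters for the three harm types discussed in the excerpt. The field names are illustrative, not a standard proposed by the authors.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a machine-readable transparency-report record. Field
# names are hypothetical, not a schema proposed by Narayanan and Kapoor.

@dataclass
class TransparencyReport:
    model_name: str
    reporting_period: str                                    # e.g., "2025-Q1"
    training_data_sources: list[str] = field(default_factory=list)
    training_data_demographics: dict[str, float] = field(default_factory=dict)
    augmentation_techniques: list[str] = field(default_factory=list)
    biases_detected: list[str] = field(default_factory=list)
    mitigations_applied: list[str] = field(default_factory=list)
    policy_enforcement_actions: int = 0    # misuse cases acted on (first harm type)
    overreliance_incidents: int = 0        # factual over-reliance harms (second harm type)
    intrinsic_harm_reports: int = 0        # unrequested harmful outputs (third harm type)
```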