Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Thursday, March 28, 2024

Antagonistic AI

A. Cai, I. Arawjo, E. L. Glassman
arXiv:2402.07350
Originally submitted February 12, 2024

The vast majority of discourse around AI development assumes that subservient, "moral" models aligned with "human values" are universally beneficial -- in short, that good AI is sycophantic AI. We explore the shadow of the sycophantic paradigm, a design space we term antagonistic AI: AI systems that are disagreeable, rude, interrupting, confrontational, challenging, etc. -- embedding opposite behaviors or values. Far from being "bad" or "immoral," we consider whether antagonistic AI systems may sometimes have benefits to users, such as forcing users to confront their assumptions, build resilience, or develop healthier relational boundaries. Drawing from formative explorations and a speculative design workshop where participants designed fictional AI technologies that employ antagonism, we lay out a design space for antagonistic AI, articulating potential benefits, design techniques, and methods of embedding antagonistic elements into user experience. Finally, we discuss the many ethical challenges of this space and identify three dimensions for the responsible design of antagonistic AI -- consent, context, and framing.


Here is my summary:

This article proposes a thought-provoking concept: designing AI systems that intentionally challenge and disagree with users. It pushes back against the dominant view that AI should be subservient and aligned with human values, exploring instead the potential benefits of "antagonistic AI" for stimulating critical thinking and confronting users' assumptions. While the authors acknowledge the ethical concerns and propose responsible design principles, the article would benefit from a deeper discussion of potential harms, concrete examples of how such AI might function in practice, and evidence of how users would actually receive it. Overall, "Antagonistic AI" is a valuable contribution that prompts further exploration and discussion of the responsible development and societal implications of such systems.

Thursday, May 3, 2018

Why Pure Reason Won’t End American Tribalism

Robert Wright
www.wired.com
Originally published April 9, 2018

Here is an excerpt:

Pinker also understands that cognitive biases can be activated by tribalism. “We all identify with particular tribes or subcultures,” he notes—and we’re all drawn to opinions that are favored by the tribe.

So far so good: These insights would seem to prepare the ground for a trenchant analysis of what ails the world—certainly including what ails an America now famously beset by political polarization, by ideological warfare that seems less and less metaphorical.

But Pinker’s treatment of the psychology of tribalism falls short, and it does so in a surprising way. He pays almost no attention to one of the first things that springs to mind when you hear the word “tribalism.” Namely: People in opposing tribes don’t like each other. More than Pinker seems to realize, the fact of tribal antagonism challenges his sunny view of the future and calls into question his prescriptions for dispelling some of the clouds he does see on the horizon.

I’m not talking about the obvious downside of tribal antagonism—the way it leads nations to go to war or dissolve in civil strife, the way it fosters conflict along ethnic or religious lines. I do think this form of antagonism is a bigger problem for Pinker’s thesis than he realizes, but that’s a story for another day. For now the point is that tribal antagonism also poses a subtler challenge to his thesis. Namely, it shapes and drives some of the cognitive distortions that muddy our thinking about critical issues; it warps reason.

The article is here.