Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, September 2, 2023

Do AI girlfriend apps promote unhealthy expectations for human relationships?

Josh Taylor
The Guardian
Originally posted 21 July 2023

Here is an excerpt:

When you sign up for the Eva AI app, it prompts you to create the “perfect partner”, giving you options like “hot, funny, bold”, “shy, modest, considerate” or “smart, strict, rational”. It will also ask if you want to opt in to sending explicit messages and photos.

“Creating a perfect partner that you control and meets your every need is really frightening,” said Tara Hunter, the acting CEO for Full Stop Australia, which supports victims of domestic or family violence. “Given what we know already that the drivers of gender-based violence are those ingrained cultural beliefs that men can control women, that is really problematic.”

Dr Belinda Barnet, a senior lecturer in media at Swinburne University, said the apps cater to a need, but, as with much AI, it will depend on what rules guide the system and how it is trained.

“It’s completely unknown what the effects are,” Barnet said. “With respect to relationship apps and AI, you can see that it fits a really profound social need [but] I think we need more regulation, particularly around how these systems are trained.”

Having a relationship with an AI whose functions are set at the whim of a company also has its drawbacks. Replika’s parent company Luka Inc faced a backlash from users earlier this year when the company hastily removed erotic roleplay functions, a move which many of the company’s users found akin to gutting the Rep’s personality.

Users on the subreddit compared the change to the grief felt at the death of a friend. The moderator on the subreddit noted users were feeling “anger, grief, anxiety, despair, depression, [and] sadness” at the news.

The company ultimately restored the erotic roleplay functionality for users who had registered before the policy change date.

Rob Brooks, an academic at the University of New South Wales, noted at the time the episode was a warning for regulators of the real impact of the technology.

“Even if these technologies are not yet as good as the ‘real thing’ of human-to-human relationships, for many people they are better than the alternative – which is nothing,” he said.


My thoughts: Experts worry that these apps could promote unhealthy expectations for human relationships, as users may come to expect their partners to be perfectly compliant and controllable. Additionally, there is concern that these apps could reinforce harmful gender stereotypes and contribute to violence against women.

The risks of AI girlfriend apps are still largely unknown, and more research is needed to understand their impact on human relationships. In the meantime, it is important to be aware of their potential harms and to regulate them accordingly.

Sunday, January 24, 2021

Trust does not need to be human: it is possible to trust medical AI

Ferrario A, Loi M, Viganò E.
Journal of Medical Ethics 
Published Online First: 25 November 2020. 
doi: 10.1136/medethics-2020-106922

Abstract

In his recent article ‘Limits of trust in medical AI,’ Hatherley argues that, if we believe that the motivations that are usually recognised as relevant for interpersonal trust have to be applied to interactions between humans and medical artificial intelligence, then these systems do not appear to be the appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence (AI), if one refrains from simply assuming that trust describes human–human interactions. To do so, we consider an account of trust that distinguishes trust from reliance in a way that is compatible with trusting non-human agents. In this account, to trust a medical AI is to rely on it with little monitoring and control of the elements that make it trustworthy. This attitude does not imply specific properties in the AI system that in fact only humans can have. This account of trust is applicable, in particular, to all cases where a physician relies on the medical AI predictions to support his or her decision making.

Here is an excerpt:

Let us clarify our position with an example. Medical AIs support decision making by the provision of predictions, often in the form of machine learning model outcomes, to identify and plan better prognoses, diagnoses and treatments.3 These outcomes are the result of complex computational processes on high-dimensional data that are difficult to understand by physicians. Therefore, it may be convenient to look at the medical AI as a ‘black box’, or an input–output system whose internal mechanisms are not directly accessible or understandable. Through a sufficient number of interactions with the medical AI, its developers and AI-savvy colleagues, and by analysing different types of outputs (eg, those of young patients or multimorbid ones), the physician may develop a mental model, that is, a set of beliefs, on the performance and error patterns of the AI. We describe this phase in the relation between the physician and the AI as the ‘mere reliance’ phase, which does not need to involve trust (or at best involves very little trust).