Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Virtual Reality.

Monday, August 14, 2023

Artificial intelligence, superefficiency and the end of work: a humanistic perspective on meaning in life

Knell, S., & Rüther, M. (2023). 
AI and Ethics.

Abstract

How would it be assessed from an ethical point of view if human wage work were replaced by artificially intelligent systems (AI) in the course of an automation process? An answer to this question has been discussed above all under the aspects of individual well-being and social justice. Although these perspectives are important, in this article, we approach the question from a different perspective: that of leading a meaningful life, as understood in analytical ethics on the basis of the so-called meaning-in-life debate. Our thesis here is that a life without wage work loses specific sources of meaning, but can still be sufficiently meaningful in certain other ways. Our starting point is John Danaher’s claim that ubiquitous automation inevitably leads to an achievement gap. Although we share this diagnosis, we reject his provocative solution according to which game-like virtual realities could be an adequate substitute source of meaning. Subsequently, we outline our own systematic alternative which we regard as a decidedly humanistic perspective. It focuses both on different kinds of social work and on rather passive forms of being related to meaningful contents. Finally, we go into the limits and unresolved points of our argumentation as part of an outlook, but we also try to defend its fundamental persuasiveness against a potential objection.

Concluding remarks

In this article, we explored the question of how we can find meaning in a post-work world. Our answer relies on a critique of John Danaher’s utopia of games and tries to stick to the humanistic idea, namely to the idea that we do not have to alter our human lifeform in an extensive way and also can keep up our orientation towards common ideals, such as working towards the good, the true and the beautiful.

Our proposal still has some shortcomings, which include the following two that we cannot deal with extensively but at least want to briefly comment on. First, we assumed that certain professional fields, especially in the meaning-conferring area of the good, cannot be automated, so that the possibility of mini-jobs in these areas can be considered. This assumption is based on a substantial thesis from the philosophy of mind, namely that AI systems cannot develop consciousness and, consequently, genuine empathy. This assumption needs to be further elaborated, especially in view of some forecasts that even the altruistic and philanthropic professions are not immune to the automation of superefficient systems. Second, we have adopted without further critical discussion the premise of the hybrid standard model of a meaningful life, according to which meaning-conferring objective value is to be found in the three spheres of the true, the good, and the beautiful. We take this premise to be intuitively appealing, but a further elaboration of our argumentation would have to try to figure out whether this triad is really exhaustive and, if so, due to which underlying more general principle.


Full transparency: I am a big John Danaher fan. Regardless, here is my summary:

Humans are meaning makers. We find meaning in our work, our relationships, and our engagement with the world. The article discusses the potential impact of AI on the meaning of work, and the authors make some good points. However, I think their solution is somewhat idealistic. It is true that social relationships and engagement with the world can provide us with meaning, but these sources of meaning will be harder to cultivate in a world where AI is doing most of the work. We will need ways to cooperate, achieve, and interact so that our behaviors are geared toward superordinate goals. Humans need to align their lives with core human principles, such as meaning-making, pattern repetition, cooperation, and values-based behaviors.
  • The authors focus on the potential impact of AI on the meaning of work, but they acknowledge that other factors, such as automation and globalization, also have an impact.
  • The authors' solution is based on the idea that meaning comes from relationships and engagement with the world. However, there are other theories about the meaning of life, such as the idea that meaning comes from self-actualization or from religious faith.
  • The authors acknowledge that their solution is not perfect, but they argue that it is a better alternative than Danaher's. However, I think it is important to consider all of the options before deciding which one is best. Ultimately, this will come down to a values-based decision, as there seems to be no single right solution.

Friday, June 1, 2018

CGI ‘Influencers’ Like Lil Miquela Are About to Flood Your Feeds

Miranda Katz
www.wired.com
Originally published May 1, 2018

Here is an excerpt:

There are already a number of startups working on commercial applications for what they call “digital” or “virtual” humans. Some, like the New Zealand-based Soul Machines, are focusing on using these virtual humans for customer service applications; already, the company has partnered with the software company Autodesk, Daimler Financial Services, and National Westminster Bank to create hyper-lifelike digital assistants. Others, like 8i and Quantum Capture, are working on creating digital humans for virtual, augmented, and mixed reality applications.

And those startups’ technologies, though still in their early stages, make Lil Miquela and her cohort look positively low-res. “[Lil Miquela] is just scratching the surface of what these virtual humans can do and can be,” says Quantum Capture CEO and president Morgan Young. “It’s pre-rendered, computer-generated snapshots—images that look great, but that’s about as far as it’s going to go, as far as I can tell, with their tech. We’re concentrating on a high level of visual quality and also on making these characters come to life.”

Quantum Capture is focused on VR and AR, but the Toronto-based company is also aware that those might see relatively slow adoption—and so it’s currently leveraging its 3D-scanning and motion-capture technologies for real-world applications today.

The information is here.

Tuesday, March 22, 2016

We're Already Violating Virtual Reality's First Code of Ethics

By Daniel Oberhaus
Motherboard.com
Originally published March 6, 2016

Here is an excerpt:

Indeed, it was in light of this potential for lasting psychological impact during and after a virtual reality experience that Madary and Metzinger drafted a list of six main recommendations for the ethical future of commercial and research virtual reality applications. Broadly summarized, their recommendations are:

1) In keeping with the American Psychological Association’s principle of non-maleficence, experiments using virtual reality should ensure that they do not cause lasting or serious harm to the subject.

2) Subjects participating in experiments using virtual reality should be informed that such experiences can have lasting and serious behavioral effects, and that the extent of this behavioral influence might not be known.

3) Researchers and media outlets should avoid over-hyping the benefits of virtual reality, especially when virtual reality is being discussed as a medical treatment.

4) Researchers should be aware of the problem of dual use, that is, of a technology being used for something other than its original purpose, in the context of virtual reality. The authors are particularly wary of military applications of virtual reality (which are already being put to considerable use), whether this means its use as a novel torture device or as a means of decreasing a soldier’s empathy for the enemy.

The article is here.

Tuesday, June 16, 2015

Affective basis of judgment-behavior discrepancy in virtual experiences of moral dilemmas

I. Patil, C. Cogoni, N. Zangrando, L. Chittaro, and G. Silani
Social Neuroscience, 2014
Vol. 9, No. 1, 94-107

Abstract

Although research in moral psychology in the last decade has relied heavily on hypothetical moral dilemmas and has been effective in understanding moral judgment, how these judgments translate into behaviors remains a largely unexplored issue due to the harmful nature of the acts involved. To study this link, we follow a new approach based on a desktop virtual reality environment. In our within-subjects experiment, participants exhibited an order-dependent judgment-behavior discrepancy across temporally separated sessions, with many of them behaving in a utilitarian manner in virtual reality dilemmas despite their nonutilitarian judgments for the same dilemmas in textual descriptions. This change in decisions was reflected in the autonomic arousal of participants, with dilemmas in virtual reality being perceived as more emotionally arousing than the ones in text, after controlling for general differences between the two presentation modalities (virtual reality vs. text). This suggests that moral decision-making in hypothetical moral dilemmas is susceptible to contextual saliency of the presentation of these dilemmas.
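The design the abstract describes, the same dilemmas judged in text and acted on in desktop VR by the same participants, can be illustrated with a minimal analysis sketch. This is only a toy illustration under my own assumptions: the data, column names, and the use of a paired t-test are hypothetical and are not taken from the paper.

# Minimal sketch (not the authors' analysis): paired comparison of utilitarian
# responses and arousal for the same participants across text and VR modalities.
# All numbers and column names below are made up for illustration.
import pandas as pd
from scipy import stats

# Hypothetical long-format data: one row per participant per modality.
df = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4],
    "modality":    ["text", "vr"] * 4,
    "utilitarian": [0, 1, 0, 1, 1, 1, 0, 0],   # 1 = utilitarian response
    "arousal":     [0.21, 0.47, 0.18, 0.39, 0.25, 0.52, 0.30, 0.44],  # e.g., skin conductance, arbitrary units
})

# Reshape so each participant has paired text/VR observations.
wide = df.pivot(index="participant", columns="modality",
                values=["utilitarian", "arousal"])

# Judgment-behavior discrepancy: utilitarian rate in VR versus text.
print("Utilitarian rate (text):", wide["utilitarian"]["text"].mean())
print("Utilitarian rate (VR):  ", wide["utilitarian"]["vr"].mean())

# Paired comparison of arousal across modalities (toy numbers, so the
# p-value is illustrative only).
t, p = stats.ttest_rel(wide["arousal"]["vr"], wide["arousal"]["text"])
print(f"Paired t-test on arousal: t = {t:.2f}, p = {p:.3f}")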

The entire article is here.

Wednesday, May 14, 2014

The Ethics of Virtual Rape

By John Danaher
Philosophical Disquisitions
Originally published April 26, 2014

The notorious 1982 video game Custer’s Revenge requires the player to direct their crudely pixellated character (General Custer) to avoid attacks so that he can rape a Native American woman who is tied to a stake. The game, unsurprisingly, generated a great deal of controversy and criticism at the time of its release. Since then, video games with similarly problematic content, but far more realistic imagery, have been released. For example, in 2006 the Japanese company Illusion released the game RapeLay, in which the player stalks and rapes a mother and her two daughters.

The question I want to explore in this post is the morality of such representations. One could, of course, argue that they are extrinsically wrong, i.e. that they give rise to behaviour that is morally problematic and so should be limited or prohibited for that reason. This is like the typical “violent video games cause real violence” claim, and I suspect it would be equally hard to prove in practice. The more interesting question is whether there is something intrinsically wrong with playing (and perhaps enjoying) such video games. Prima facie, the answer would seem to be “no”, since no one is actually harmed or wronged in the virtual act. But maybe there is more to it than this?

The entire article is here.

Wednesday, February 12, 2014

Environmental Psychology Matters

Annual Review of Psychology
Vol. 65: 541-579 (Volume publication date January 2014)
First published online as a Review in Advance on September 11, 2013
DOI: 10.1146/annurev-psych-010213-115048

Abstract

Environmental psychology examines transactions between individuals and their built and natural environments. This includes investigating behaviors that inhibit or foster sustainable, climate-healthy, and nature-enhancing choices, the antecedents and correlates of those behaviors, and interventions to increase proenvironmental behavior. It also includes transactions in which nature provides restoration or inflicts stress, and transactions that are more mutual, such as the development of place attachment and identity and the impacts on and from important physical settings such as home, workplaces, schools, and public spaces. As people spend more time in virtual environments, online transactions are coming under increasing research attention. Every aspect of human existence occurs in one environment or another, and the transactions with and within them have important consequences both for people and their natural and built worlds. Environmental psychology matters.

The entire review article is here.

Tuesday, February 4, 2014

Virtual Reality Moral Dilemmas Show Just How Utilitarian We Really Are

Science Daily
Originally published January 15, 2014

"Moral" psychology has traditionally been studied by subjecting individuals to moral dilemmas, that is, hypothetical choices regarding typically dangerous scenarios, but it has rarely been validated "in the field." This limitation may have led to systematic bias in hypotheses regarding the cognitive bases of moral judgements. A study relying on virtual reality has demonstrated that, in real situations, we might be far more "utilitarian" than believed so far.

The entire article is here.