Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Turing Test. Show all posts

Monday, June 10, 2024

Attributions toward artificial agents in a modified Moral Turing Test

Aharoni, E., Fernandes, S., Brady, D.J. et al.
Sci Rep 14, 8458 (2024).

Abstract

Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by Allen et al.'s (Exp Theor Artif Intell 352:24–28, 2004) proposal, by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source. Remarkably, they rated the AI’s moral reasoning as superior in quality to humans’ along almost all dimensions, including virtuousness, intelligence, and trustworthiness, consistent with passing what Allen and colleagues call the comparative MTT. Next, when tasked with identifying the source of each evaluation (human or computer), people performed significantly above chance levels. Although the AI did not pass this test, this was not because of its inferior moral reasoning but, potentially, its perceived superiority, among other possible explanations. The emergence of language models capable of producing moral responses perceived as superior in quality to humans’ raises concerns that people may uncritically accept potentially harmful moral guidance from AI. This possibility highlights the need for safeguards around generative language models in matters of morality.

Here is my summary:

The researchers conducted a modified Moral Turing Test (m-MTT) to investigate if people view moral evaluations by advanced AI systems similarly to those by humans. They had participants rate the quality of moral reasoning from the AI language model GPT-4 and from humans, while initially blinded to the source.

Key Findings
  • Remarkably, participants rated GPT-4's moral reasoning as superior in quality to humans' across dimensions like virtuousness, intelligence, and trustworthiness. This is consistent with passing the "comparative MTT" proposed previously.
  • When later asked to identify if the moral evaluations came from a human or computer, participants performed above chance levels.
  • However, GPT-4 did not definitively "pass" this test, potentially because its perceived superiority made it identifiable as AI.
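The "above chance" finding is the kind of result a one-sided binomial test captures. As a minimal sketch of that test (the counts below are invented for illustration; they are not the study's data):

```python
from math import comb

def binom_sf(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): one-sided p-value for
    observing at least k correct source identifications out of n,
    against the chance-guessing null of p = 0.5."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative numbers only: suppose 180 of 299 judgments were correct.
p_value = binom_sf(180, 299, 0.5)
print(f"one-sided p = {p_value:.4g}")  # far below 0.05, i.e. above chance
```

A p-value well under 0.05 here is what "performed significantly above chance levels" means statistically.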

Thursday, October 14, 2021

A Minimal Turing Test

McCoy, J. P., and Ullman, T.D.
Journal of Experimental Social Psychology
Volume 79, November 2018, Pages 1-8

Abstract

We introduce the Minimal Turing Test, an experimental paradigm for studying perceptions and meta-perceptions of different social groups or kinds of agents, in which participants must use a single word to convince a judge of their identity. We illustrate the paradigm by having participants act as contestants or judges in a Minimal Turing Test in which contestants must convince a judge they are a human, rather than an artificial intelligence. We embed the production data from such a large-scale Minimal Turing Test in a semantic vector space, and construct an ordering over pairwise evaluations from judges. This allows us to identify the semantic structure in the words that people give, and to obtain quantitative measures of the importance that people place on different attributes. Ratings from independent coders of the production data provide additional evidence for the agency and experience dimensions discovered in previous work on mind perception. We use the theory of Rational Speech Acts as a framework for interpreting the behavior of contestants and judges in the Minimal Turing Test.
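One standard way to "construct an ordering over pairwise evaluations," as the abstract describes, is a Bradley–Terry model fit to judges' pairwise choices. The sketch below is an illustration of that general technique, not the paper's actual method; the words and win counts are invented:

```python
# Toy sketch: turning pairwise "which word seems more human?" judgments
# into a ranking with a Bradley-Terry model, fit by the standard
# minorization-maximization (MM) updates. All data below are made up.

wins = {  # wins[(a, b)] = times word a beat word b in pairwise judging
    ("love", "robot"): 9, ("robot", "love"): 1,
    ("love", "banana"): 6, ("banana", "love"): 4,
    ("banana", "robot"): 7, ("robot", "banana"): 3,
}

words = {w for pair in wins for w in pair}
strength = {w: 1.0 for w in words}

for _ in range(200):  # MM update: s_w <- (total wins of w) / sum_v n_wv / (s_w + s_v)
    new = {}
    for w in words:
        num = sum(c for (a, b), c in wins.items() if a == w)
        den = sum((wins.get((w, v), 0) + wins.get((v, w), 0))
                  / (strength[w] + strength[v])
                  for v in words if v != w)
        new[w] = num / den
    total = sum(new.values())  # rescale so strengths average to 1
    strength = {w: s * len(words) / total for w, s in new.items()}

ranking = sorted(words, key=strength.get, reverse=True)
print(ranking)
```

The fitted strengths give a quantitative ordering over contestants' words, analogous in spirit to the importance measures the authors derive from judges' evaluations.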


Wednesday, October 31, 2018

We’re Worrying About the Wrong Kind of AI

Mark Buchanan
Bloomberg.com
Originally posted June 11, 2018

No computer has yet shown features of true human-level artificial intelligence, much less conscious awareness. Some experts think we won't see it for a long time to come. And yet academics, ethicists, developers and policy-makers are already thinking a lot about the day when computers become conscious, not to mention worrying about more primitive AI being used in defense projects.

Now consider that biologists have been learning to grow functioning “mini brains” or “brain organoids” from real human cells, and progress has been so fast that researchers are actually worrying about what to do if a piece of tissue in a lab dish suddenly shows signs of having conscious states or reasoning abilities. While we are busy focusing on computer intelligence, AI may arrive in living form first, and bring with it a host of unprecedented ethical challenges.

In the 1930s, the British mathematician Alan Turing famously set out the mathematical foundations for digital computing. It's less well known that Turing later pioneered the mathematical theory of morphogenesis, or how organisms develop from single cells into complex multicellular beings through a sequence of controlled transformations making increasingly intricate structures. Morphogenesis is also a computation, only with a genetic program controlling not just 0s and 1s, but complex chemistry, physics and cellular geometry.

Following Turing's thinking, biologists have learned to control the computation of biological development so accurately that lab growth of artificial organs, even brains, is no longer science fiction.

The information is here.

Tuesday, June 12, 2018

Did Google Duplex just pass the Turing Test?

Lance Ulanoff
Medium.com
Originally published

Here is an excerpt:

In short, this means that while Duplex has your hair and dining-out options covered, it could stumble in movie reservations and negotiations with your cable provider.

Even so, Duplex fooled two humans. I heard no hesitation or confusion. In the hair salon call, there was no indication that the salon worker thought something was amiss. She wanted to help this young woman make an appointment. What will she think when she learns she was duped by Duplex?

Obviously, Duplex’s conversations were also short, each lasting less than a minute, putting them well short of the Turing Test benchmark. I would’ve enjoyed hearing the conversations devolve as they extended a few minutes or more.

I’m sure Duplex will soon tackle more domains and longer conversations, and it will someday pass the Turing Test.

It’s only a matter of time before Duplex is handling other mundane or difficult calls for us, like calling our parents with our own voices (see Wavenet technology). Eventually, we’ll have our Duplex voices call each other, handling pleasantries and making plans, which Google Assistant can then drop in our Google Calendar.

The information is here.

Monday, June 4, 2018

Human-sounding Google Assistant sparks ethics questions

The Straits Times
Originally published May 9, 2018

Here are some excerpts:

The new Google digital assistant converses so naturally it may seem like a real person.

The unveiling of the natural-sounding robo-assistant by the tech giant this week wowed some observers, but left others fretting over the ethics of how the human-seeming software might be used.

(cut)

The Duplex demonstration was quickly followed by debate over whether people answering phones should be told when they are speaking to human-sounding software and how the technology might be abused in the form of more convincing "robocalls" by marketers or political campaigns.

(cut)

Digital assistants making arrangements for people also raises the question of who is responsible for mistakes, such as a no-show or cancellation fee for an appointment set for the wrong time.

The information is here.

Friday, October 17, 2014

The intuitional problem of consciousness

By Mark O'Brien
Scientia Salon
Originally posted September 1, 2014

Here is an excerpt:

It would seem instead that consciousness must be a property of some kind. It is certainly true that physical properties are not usually exhibited by simulations. A simulation of a waterfall is not wet, a simulation of a fire is not hot, and a virtual black hole is not going to spaghettify [5] you any time soon. However I think that there are some properties which are not physical in this way, and these may be preserved in virtualisation. Orderliness, complexity, elegance and even intelligent, intentional behavior can be just as evident in simulations as they are in physical things. I propose that such properties be called abstract properties.

The entire article is here.