Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Simulation.

Thursday, March 21, 2024

AI-synthesized faces are indistinguishable from real faces and more trustworthy

Nightingale, S. J., & Farid, H. (2022).
Proceedings of the National Academy of Sciences of the United States of America, 119(8).

Abstract

Artificial intelligence (AI)–synthesized text, audio, image, and video are being weaponized for the purposes of nonconsensual intimate imagery, financial fraud, and disinformation campaigns. Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable—and more trustworthy—than real faces.

Here is part of the Discussion section:

Synthetically generated faces are not just highly photorealistic, they are nearly indistinguishable from real faces and are judged more trustworthy. This hyperphotorealism is consistent with recent findings. These two studies did not contain the same diversity of race and gender as ours, nor did they match the real and synthetic faces as we did to minimize the chance of inadvertent cues. While it is less surprising that White male faces are highly realistic—because these faces dominate the neural network training—we find that the realism of synthetic faces extends across race and gender. Perhaps most interestingly, we find that synthetically generated faces are more trustworthy than real faces. This may be because synthesized faces tend to look more like average faces which themselves are deemed more trustworthy. Regardless of the underlying reason, synthetically generated faces have emerged on the other side of the uncanny valley. This should be considered a success for the fields of computer graphics and vision. At the same time, easy access (https://thispersondoesnotexist.com) to such high-quality fake imagery has led and will continue to lead to various problems, including more convincing online fake profiles and—as synthetic audio and video generation continues to improve—problems of nonconsensual intimate imagery, fraud, and disinformation campaigns, with serious implications for individuals, societies, and democracies.
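
The trustworthiness finding comes from averaging participants' ratings of synthetic and real faces and comparing the two groups. As a rough, hypothetical illustration of that kind of comparison (invented placeholder ratings on an assumed 1-7 scale, not the authors' data or analysis code):

```python
# Hypothetical sketch: comparing mean trustworthiness ratings for
# synthetic vs. real faces. The numbers are invented placeholders,
# not data from Nightingale & Farid (2022).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
synthetic_ratings = rng.normal(loc=4.8, scale=1.0, size=300)  # 1-7 scale (assumed)
real_ratings = rng.normal(loc=4.5, scale=1.0, size=300)

t, p = stats.ttest_ind(synthetic_ratings, real_ratings)
print(f"mean synthetic = {synthetic_ratings.mean():.2f}, "
      f"mean real = {real_ratings.mean():.2f}, t = {t:.2f}, p = {p:.4f}")
```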

We, therefore, encourage those developing these technologies to consider whether the associated risks are greater than their benefits. If so, then we discourage the development of technology simply because it is possible. If not, then we encourage the parallel development of reasonable safeguards to help mitigate the inevitable harms from the resulting synthetic media. Safeguards could include, for example, incorporating robust watermarks into the image and video synthesis networks that would provide a downstream mechanism for reliable identification. Because it is the democratization of access to this powerful technology that poses the most significant threat, we also encourage reconsideration of the often laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application.
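
The watermarking safeguard the authors describe would be built into the synthesis networks themselves and designed to survive editing. As a loose, much simpler sketch of the downstream-identification idea only (a toy least-significant-bit watermark with hypothetical helper names, not a robust production scheme):

```python
# Toy illustration of embedding and checking an identifying watermark in
# an image array. A real "robust" watermark, as the authors propose,
# would survive compression and editing; this LSB version does not.
import numpy as np

def embed_watermark(image, bits):
    """Hide a bit string in the least significant bits of the first pixels."""
    marked = image.astype(np.uint8).copy()
    flat = marked.reshape(-1)
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit
    return marked

def read_watermark(image, n_bits):
    """Recover the hidden bit string."""
    return [int(v & 1) for v in image.reshape(-1)[:n_bits]]

signature = [1, 0, 1, 1, 0, 0, 1, 0]  # identifier marking "synthetic"
fake_face = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
tagged = embed_watermark(fake_face, signature)
assert read_watermark(tagged, len(signature)) == signature
```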

Here are some important points:

This research raises concerns about the potential for misuse of AI-generated faces in areas like deepfakes and disinformation campaigns.

It also opens up interesting questions about how we perceive trust and authenticity in our increasingly digital world.

Tuesday, May 30, 2023

Are We Ready for AI to Raise the Dead?

Jack Holmes
Esquire Magazine
Originally posted May 4, 2023

Here is an excerpt:

You can see wonderful possibilities here. Some might find comfort in hearing their mom’s voice, particularly if she sounds like she really sounded and gives the kind of advice she really gave. But Sandel told me that when he presents the choice to students in his ethics classes, the reaction is split, even as he asks in two different ways. First, he asks whether they’d be interested in the chatbot if their loved one bequeathed it to them upon their death. Then he asks if they’d be interested in building a model of themselves to bequeath to others. Oh, and what if a chatbot is built without input from the person getting resurrected? The notion that someone chose to be represented posthumously in a digital avatar seems important, but even then, what if the model makes mistakes? What if it misrepresents—slanders, even—the dead?

Soon enough, these questions won’t be theoretical, and there is no broad agreement about whom—or even what—to ask. We’re approaching a more fundamental ethical quandary than we often hear about in discussions around AI: human bias embedded in algorithms, privacy and surveillance concerns, mis- and disinformation, cheating and plagiarism, the displacement of jobs, deepfakes. These issues are really all interconnected—Osama bot Laden might make the real guy seem kinda reasonable or just preach jihad to tweens—and they all need to be confronted. We think a lot about the mundane (kids cheating in AP History) and the extreme (some advanced AI extinguishing the human race), but we’re more likely to careen through the messy corridor in between. We need to think about what’s allowed and how we’ll decide.

(cut)

Our governing troubles are compounded by the fact that, while a few firms are leading the way on building these unprecedented machines, the technology will soon become diffuse. More of the codebase for these models is likely to become publicly available, enabling highly talented computer scientists to build their own in the garage. (Some folks at Stanford have already built a ChatGPT imitator for around $600.) What happens when some entrepreneurial types construct a model of a dead person without the family’s permission? (We got something of a preview in April when a German tabloid ran an AI-generated interview with ex–Formula 1 driver Michael Schumacher, who suffered a traumatic brain injury in 2013. His family threatened to sue.) What if it’s an inaccurate portrayal or it suffers from what computer scientists call “hallucinations,” when chatbots spit out wildly false things? We’ve already got revenge porn. What if an old enemy constructs a false version of your dead wife out of spite? “There’s an important tension between open access and safety concerns,” Reich says. “Nuclear fusion has enormous upside potential,” too, he adds, but in some cases, open access to the flesh and bones of AI models could be like “inviting people around the world to play with plutonium.”


Yes, there was a Black Mirror episode (Be Right Back) about this issue.  The wiki is here.

Tuesday, December 10, 2019

AI Deemed 'Too Dangerous To Release' Makes It Out Into The World

Andrew Griffin
independent.co.uk
Originally posted November 8, 2019

Here is an excerpt:

"Due to our concerns about malicious applications of the technology, we are not releasing the trained model," wrote OpenAI in a February blog post, released when it made the announcement. "As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."

At that time, the organisation released only a very limited version of the tool, which used 124 million parameters. It has released more complex versions ever since, and has now made the full version available.
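
For context, the staged release described here is the GPT-2 family of models, whose weights are now publicly downloadable. A minimal sketch of generating text with the released model, assuming the Hugging Face transformers package ("gpt2" is the 124-million-parameter checkpoint; "gpt2-xl" is the full 1.5-billion-parameter version):

```python
# Minimal sketch of text generation with the publicly released GPT-2
# weights via Hugging Face transformers (assumed installed).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
output = generator("The news report claimed that", max_new_tokens=30, num_return_sequences=1)
print(output[0]["generated_text"])
```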

The full version is more convincing than the smaller one, but only "marginally". The relatively limited increase in credibility was part of what encouraged the researchers to make it available, they said.

It hopes that the release can partly help the public understand how such a tool could be misused, and help inform discussions among experts about how that danger can be mitigated.

In February, researchers said that there was a variety of ways that malicious people could misuse the programme. The outputted text could be used to create misleading news articles, impersonate other people, automatically create abusive or fake content for social media or to use to spam people with – along with a variety of possible uses that might not even have been imagined yet, they noted.

Such misuses would require the public to become more critical about the text they read online, which could have been generated by artificial intelligence, they said.

The info is here.

Friday, June 1, 2018

CGI ‘Influencers’ Like Lil Miquela Are About to Flood Your Feeds

Miranda Katz
www.wired.com
Originally published May 1, 2018

Here is an excerpt:

There are already a number of startups working on commercial applications for what they call “digital” or “virtual” humans. Some, like the New Zealand-based Soul Machines, are focusing on using these virtual humans for customer service applications; already, the company has partnered with the software company Autodesk, Daimler Financial Services, and National Westminster Bank to create hyper-lifelike digital assistants. Others, like 8i and Quantum Capture, are working on creating digital humans for virtual, augmented, and mixed reality applications.

And those startups’ technologies, though still in their early stages, make Lil Miquela and her cohort look positively low-res. “[Lil Miquela] is just scratching the surface of what these virtual humans can do and can be,” says Quantum Capture CEO and president Morgan Young. “It’s pre-rendered, computer-generated snapshots—images that look great, but that’s about as far as it’s going to go, as far as I can tell, with their tech. We’re concentrating on a high level of visual quality and also on making these characters come to life.”

Quantum Capture is focused on VR and AR, but the Toronto-based company is also aware that those might see relatively slow adoption—and so it’s currently leveraging its 3D-scanning and motion-capture technologies for real-world applications today.

The information is here.

Thursday, April 19, 2018

Artificial Intelligence Is Killing the Uncanny Valley and Our Grasp on Reality

Sandra Upson
Wired.com
Originally posted February 16, 2018

Here is an excerpt:

But it’s not hard to see how this creative explosion could all go very wrong. For Yuanshun Yao, a University of Chicago graduate student, it was a fake video that set him on his recent project probing some of the dangers of machine learning. He had hit play on a recent clip of an AI-generated, very real-looking Barack Obama giving a speech, and got to thinking: Could he do a similar thing with text?

A text composition needs to be nearly perfect to deceive most readers, so he started with a forgiving target, fake online reviews for platforms like Yelp or Amazon. A review can be just a few sentences long, and readers don’t expect high-quality writing. So he and his colleagues designed a neural network that spat out Yelp-style blurbs of about five sentences each. Out came a bank of reviews that declared such things as, “Our favorite spot for sure!” and “I went with my brother and we had the vegetarian pasta and it was delicious.” He asked humans to then guess whether they were real or fake, and sure enough, the humans were often fooled.
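
Yao and his colleagues used a neural network for this. As a far cruder illustration of how short review-style blurbs can be generated automatically, here is a word-level Markov-chain sketch (the seed reviews are invented for the example and are not from the study):

```python
# Crude word-level Markov chain that generates short, Yelp-style blurbs.
# Only an illustration of automated review generation; the study described
# above used a neural network, not a Markov model.
import random
from collections import defaultdict

seed_reviews = [
    "our favorite spot for sure great food and friendly staff",
    "i went with my brother and the vegetarian pasta was delicious",
    "great service and the pasta was delicious for sure",
]

transitions = defaultdict(list)
for review in seed_reviews:
    words = review.split()
    for a, b in zip(words, words[1:]):
        transitions[a].append(b)

def generate_blurb(start, length=10):
    word, out = start, [start]
    for _ in range(length - 1):
        choices = transitions.get(word)
        if not choices:
            break
        word = random.choice(choices)
        out.append(word)
    return " ".join(out)

print(generate_blurb("great"))
```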

The information is here.

Sunday, April 8, 2018

Can Bots Help Us Deal with Grief?

Evan Selinger
Medium.com
Originally posted March 13, 2018

Here are two excerpts:

Muhammad is under no illusion that he’s speaking with the dead. To the contrary, Muhammad is quick to point out the simulation he created works well when generating scripts of predictable answers, but it has difficulty relating to current events, like a presidential election. In Muhammad’s eyes, this is a feature, not a bug.

Muhammad said that “out of good conscience” he didn’t program the simulation to be surprising, because that capability would deviate too far from the goal of “personality emulation.”

This constraint fascinates me. On the one hand, we’re all creatures of habit. Without habits, people would have to deliberate before acting every single time. This isn’t practically feasible, so habits can be beneficial when they function as shortcuts that spare us from paralysis resulting from overanalysis.
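
Muhammad's point about "predictable answers" describes a deliberately constrained, scripted style of emulation. As a purely hypothetical sketch of that idea (invented keywords and replies, not his actual system), a retrieval-style bot can only return answers it was explicitly given, so it never improvises and fails gracefully on topics like current events:

```python
# Hypothetical sketch of "personality emulation" via scripted, predictable
# answers: the bot only retrieves responses it was given, so it never
# says anything surprising. Keywords and replies are invented examples.
SCRIPTED_REPLIES = {
    "advice": "Take your time and don't rush big decisions.",
    "dinner": "You know I always made too much rice. Eat something warm.",
    "worried": "Things usually look better in the morning.",
}

def emulate(message):
    lowered = message.lower()
    for keyword, reply in SCRIPTED_REPLIES.items():
        if keyword in lowered:
            return reply
    return "I'm not sure what to say about that."  # no improvisation by design

print(emulate("I'm worried about work"))
print(emulate("What do you think of the election?"))  # falls outside the script
```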

(cut)

The empty chair technique that I’m referring to was popularized by Friedrich Perls (more widely known as Fritz Perls), a founder of Gestalt therapy. The basic setup looks like this: Two chairs are placed near each other; a psychotherapy patient sits in one chair and talks to the other, unoccupied chair. When talking to the empty chair, the patient engages in role-playing and acts as if a person is seated right in front of her — someone to whom she has something to say. After making a statement, launching an accusation, or asking a question, the patient then responds to herself by taking on the absent interlocutor’s perspective.

In the case of unresolved parental issues, the dialog could have the scripted format of the patient saying something to her “mother,” and then having her “mother” respond to what she said, going back and forth in a dialog until something that seems meaningful happens. The prop of an actual chair isn’t always necessary, and the context of the conversations can vary. In a bereavement context, for example, a widow might ask the chair-as-deceased-spouse for advice about what to do in a troubling situation.

The article is here.

Monday, October 3, 2016

Moral learning: Why learning? Why moral? And why now?

Peter Railton
Cognition

Abstract

What is distinctive about bringing a learning perspective to moral psychology? Part of the answer lies in the remarkable transformations that have taken place in learning theory over the past two decades, which have revealed how powerful experience-based learning can be in the acquisition of abstract causal and evaluative representations, including generative models capable of attuning perception, cognition, affect, and action to the physical and social environment. When conjoined with developments in neuroscience, these advances in learning theory permit a rethinking of fundamental questions about the acquisition of moral understanding and its role in the guidance of behavior. For example, recent research indicates that spatial learning and navigation involve the formation of non-perspectival as well as ego-centric models of the physical environment, and that spatial representations are combined with learned information about risk and reward to guide choice and potentiate further learning. Research on infants provides evidence that they form non-perspectival expected-value representations of agents and actions as well, which help them to navigate the human environment. Such representations can be formed by highly-general mental processes such as causal and empathic simulation, and thus afford a foundation for spontaneous moral learning and action that requires no innate moral faculty and can exhibit substantial autonomy with respect to community norms. If moral learning is indeed integral with the acquisition and updating of causal and evaluative models, this affords a new way of understanding well-known but seemingly puzzling patterns in intuitive moral judgment—including the notorious “trolley problems.”
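
One concrete way to read the abstract's "expected-value representations ... combined with learned information about risk and reward to guide choice" is as a simple expected-value computation over candidate actions. A minimal, hypothetical sketch (invented probabilities and payoffs, illustrating only the generic computation, not Railton's model):

```python
# Minimal illustration of choosing actions by learned expected value.
# Probabilities and payoffs are invented; this is not Railton's model,
# just the generic computation the abstract alludes to.
actions = {
    # action: list of (probability, payoff) outcomes learned from experience
    "take the shortcut": [(0.7, 5.0), (0.3, -10.0)],  # faster, but risky
    "take the long way": [(1.0, 2.0)],                 # slower, but safe
}

def expected_value(outcomes):
    return sum(p * payoff for p, payoff in outcomes)

best = max(actions, key=lambda a: expected_value(actions[a]))
for name, outcomes in actions.items():
    print(f"{name}: EV = {expected_value(outcomes):.2f}")
print("chosen:", best)
```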

The article is here.

Tuesday, August 9, 2016

Fiction: Simulation of Social Worlds

By Keith Oatley
Trends in Cognitive Sciences
(2016) Volume 20, Issue 8, p. 618–628

Here is an excerpt:

What is the basis for effects of improved empathy and theory-of-mind with engagement in fiction? Two kinds of account are possible, process and content, and they complement each other.

One kind of process is inference: engagement in fiction may involve understanding characters by inferences of the sort we make in conversation about what people mean and what kinds of people they are. In an experiment to test this hypothesis, participants were asked to read Alice Munro's The Office, a first-person short story about a woman who rents an office in which to write. In one condition, the story starts in Munro's words, which include ‘But here comes the disclosure which is not easy for me. I am a writer. That does not sound right. Too presumptuous, phony, or at least unconvincing’. In a comparison version, the story starts with readers being told directly what the narrator feels: ‘I’m embarrassed telling people that I am a writer …’ (p. 270). People who read the version in Munro's own words had to make inferences about what kind of person the narrator was and how she felt. They attained a deeper identification and understanding of the protagonist than did those who were told directly how she felt. Engagement in fiction can be thought of as practice in inference making of this kind.

A second kind of process is transportation: the extent to which people become emotionally involved, immersed, or carried away imaginatively in a story. The more transportation that occurred in reading a story, the greater the story-consistent emotional experience has been found to be. Emotion in fiction is important because, as in life, it can signal what is significant in the relation between events and our concerns [42]. In an experiment on empathetic effects, the more readers were transported into a fictional story, the greater were found to be both their empathy and their likelihood of responding on a behavioral measure: helping someone who had dropped some pencils on the floor. The vividness of imagery during reading has been found to improve transportation and to increase empathy. To investigate such imagery, participants in a functional magnetic resonance imaging (fMRI) machine were asked to imagine a scene when given between three and six spoken phrases, for instance, ‘a dark blue carpet’ … ‘a carved chest of drawers’ … ‘an orange striped pencil’. Three phrases were enough to activate the hippocampus to its largest extent and for participants to imagine a scene with maximum vividness. In another study, one group of participants listened to a story and rated the intensity of their emotions while reading. In a second group of participants, parts of the story that raters had found most emotional produced the largest changes in heart rate and greatest fMRI-based activations.
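
The relationships described here (more transportation, more empathy and more helping) are correlational findings. A toy sketch of how such an association might be quantified (made-up ratings on an assumed 1-7 scale, not the cited studies' data):

```python
# Toy sketch: correlating self-reported transportation with empathy ratings.
# The numbers are invented placeholders, not data from the studies cited.
import numpy as np
from scipy.stats import pearsonr

transportation = np.array([2, 3, 4, 5, 5, 6, 7, 7, 3, 6])  # 1-7 scale (assumed)
empathy        = np.array([3, 3, 5, 4, 6, 6, 6, 7, 2, 5])

r, p = pearsonr(transportation, empathy)
print(f"r = {r:.2f}, p = {p:.3f}")
```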

The article is here.

Thursday, March 13, 2014

"A New Theory of Free Will" and the Peer-to-Peer Simulation Hypothesis

By Marcus Arvan
Flickers of Freedom Blog
Originally posted February 24, 2014

Here is an excerpt:

Nick Bostrom is of course well-known for arguing, on probabilistic grounds, that we are probably living in a simulation. Somewhat similarly, David Chalmers has argued that we should consider the “simulation hypothesis” not as a skeptical hypothesis that threatens our having knowledge of the external world, but rather as a metaphysical hypothesis regarding what our world is made of. Finally, the simulation hypothesis is gaining some traction in physics.

My 2013 article and subsequent unpublished work go several steps further, arguing that a new form of the simulation hypothesis -- what I call the Peer-to-Peer (P2P) Simulation Hypothesis -- is not only implied by several serious hypotheses in philosophy and physics, but that it also provides a unified explanation of (A) the mind-body problem, (B) the problem of free will, and (C) several fundamental features of quantum mechanics, while (D) providing a new solution to the problem of free will that I call "Libertarian Compatibilism."

The entire article is here.

Editor's note: I am not sure if I really understand the entire concept.  I am considering a podcast to help understand his theory.

Thursday, March 28, 2013

Bringing a Virtual Brain to Life

By Tim Requarth
The New York Times
Originally published March 18, 2013

Here are some excerpts:

In 2009, Dr. Markram conceived of the Human Brain Project, a sprawling and controversial initiative of more than 150 institutions around the world that he hopes will bring scientists together to realize his dream.
      
In January, the European Union raised the stakes by awarding the project a 10-year grant of up to $1.3 billion — an unheard-of sum in neuroscience.
      
“A meticulous virtual copy of the human brain,” Dr. Markram wrote in Scientific American, “would enable basic research on brain cells and circuits or computer-based drug trials.”
      
An equally ambitious “big brain” idea is in the works in the United States: The Obama administration is expected to propose its own project, with up to $3 billion allocated over a decade to develop technologies to track the electrical activity of every neuron in the brain.
      
But just as many obstacles stand in the way of the American project, a number of scientists have expressed serious reservations about Dr. Markram’s project.
      
Some say we don’t know enough about the brain to simulate it on a supercomputer. And even if we did, these critics ask, what would be the value of building such a complicated “virtual brain”?
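
To give a sense of what "simulating brain cells and circuits" involves even at toy scale, here is a minimal leaky integrate-and-fire neuron sketch (a textbook toy model, nowhere near the biophysical detail the Human Brain Project aims for):

```python
# Toy leaky integrate-and-fire neuron: a single simulated "brain cell".
# The Human Brain Project's models are vastly more detailed; this only
# illustrates what simulating neural dynamics means in principle.
dt, t_max = 0.1, 100.0                                       # time step and duration (ms)
tau, v_rest, v_thresh, v_reset = 10.0, -65.0, -50.0, -70.0   # membrane parameters (mV, ms)
input_current = 20.0                                         # constant injected drive (arbitrary units)

v = v_rest
spike_times = []
for step in range(int(t_max / dt)):
    dv = (-(v - v_rest) + input_current) / tau
    v += dv * dt
    if v >= v_thresh:          # threshold crossed: record a spike and reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {t_max:.0f} ms; first at {spike_times[0]:.1f} ms")
```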

(cut)

“It’s not like the Human Genome Project, where you just have to read out a few billion base pairs and you’re done,” said Peter Dayan, a neuroscientist at University College London. “For the human brain, what would you need to know to build a simulation? That’s a huge research question, and it has to do with what’s important to know about the brain.”
      
And Haim Sompolinsky, a neuroscientist at the Hebrew University of Jerusalem, said: “The rhetoric is that in a decade they will be able to reverse-engineer the human brain in computers. This is fantasy. Nothing will come close to it in a decade.”