Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, November 27, 2024

Deepfake detection with and without content warnings

Lewis, A., Vu, P., Duch, R. M., & Chowdhury, A. (2023). Deepfake detection with and without content warnings. Royal Society Open Science, 10(11).

Abstract

The rapid advancement of ‘deepfake’ video technology—which uses deep learning artificial intelligence algorithms to create fake videos that look real—has given urgency to the question of how policymakers and technology companies should moderate inauthentic content. We conduct an experiment to measure people's alertness to and ability to detect a high-quality deepfake among a set of videos. First, we find that in a natural setting with no content warnings, individuals who are exposed to a deepfake video of neutral content are no more likely to detect anything out of the ordinary (32.9%) than a control group who viewed only authentic videos (34.1%). Second, we find that when individuals are given a warning that at least one video in a set of five is a deepfake, only 21.6% of respondents correctly identify the deepfake as the only inauthentic video, while the remainder erroneously select at least one genuine video as a deepfake.
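
One way to put the 21.6% figure in context is to compare it against chance performance. The comparison below is not from the paper; it is a back-of-the-envelope sketch under two assumed random-guessing models (flagging exactly one of the five videos, or flagging a random nonempty subset):

```python
# Back-of-the-envelope chance baselines for the warned condition.
# Assumption (not from the paper): respondents guess at random.

n_videos = 5

# Model 1: guess exactly one of the five videos, uniformly at random.
p_one_guess = 1 / n_videos  # 1/5 = 20%

# Model 2: flag a uniformly random nonempty subset of the five videos;
# the strict criterion (the deepfake is the ONLY video selected) is met
# by exactly 1 of the 2**5 - 1 = 31 possible nonempty subsets.
p_subset_guess = 1 / (2 ** n_videos - 1)  # ~3.2%

print(f"chance, single-guess model: {p_one_guess:.1%}")    # 20.0%
print(f"chance, subset-guess model: {p_subset_guess:.1%}") # 3.2%
```

Read against the single-guess baseline, the observed 21.6% sits barely above chance; against the subset model it looks somewhat better, but either way most warned respondents still misclassified at least one genuine video.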

Here are some thoughts: 

The rise of deepfake technology introduces significant challenges for psychologists, particularly around trust, perception, and digital identity. As deepfakes become more sophisticated and harder to detect, they may foster a general skepticism toward digital media, online therapy platforms included. That skepticism could strain the therapeutic alliance, as clients grow warier of the reliability and authenticity of online interactions. For therapists who conduct virtual sessions or share therapeutic resources online, this growing distrust of digital content could reduce clients’ willingness to engage fully, potentially compromising therapeutic outcomes.

Another key concern is the vulnerability to misinformation that deepfakes introduce. These realistic, fabricated videos can spread misleading or harmful content, which may distress clients or influence their beliefs and behaviors. For clients already struggling with anxiety, paranoia, or trauma, the presence of undetectable deepfakes in the media landscape could intensify symptoms, making it harder for them to feel safe and secure. Therapists must be prepared to help clients navigate these feelings, addressing the psychological effects of a world where truth can be distorted at will and guiding them toward healthier media consumption habits.

Deepfake technology also threatens personal identity and privacy, presenting unique risks for both clients and therapists. The potential for either party to be misrepresented in fabricated media could lead to boundary issues or mistrust within the therapeutic relationship. If deepfake content were to circulate, it might appear credible to clients or even distort their perception of reality. This could create a barrier in therapy, with clients experiencing confusion or fear about digital identity and privacy, and could complicate therapists' ability to establish and maintain boundaries online.

The psychological implications of deepfakes also raise ethical considerations for psychologists. As trusted mental health professionals, psychologists may increasingly be called on to address clients' digital literacy and emotional stability amid a fast-evolving digital environment. Understanding and anticipating the effects of deepfake technology could become an essential component of ethical and professional responsibility in therapy. As the digital world becomes more complex, therapists are well positioned to help clients navigate these new challenges with discernment, promoting psychological resilience and healthy media habits within the therapeutic context.