Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, August 20, 2022

Truth by Repetition … without repetition: Testing the effect of instructed repetition on truth judgments

Mattavelli, S., Corneille, O., & Unkelbach, C.
Journal of Experimental Psychology: Learning, Memory, and Cognition
June 2022

Abstract

Past research indicates that people judge repeated statements as more true than new ones. An experiential consequence of repetition that may underlie this “truth effect” is processing fluency: processing statements feels easier following their repetition. In three preregistered experiments (N = 684), we examined the effect of merely instructed repetition (i.e., not experienced) on truth judgments. Experiments 1-2 instructed participants that some statements were present (vs. absent) in an exposure phase allegedly undergone by other individuals. We then asked them to rate such statements as they thought those individuals would have rated them. Overall, participants rated repeated statements as more true than new statements. The instruction-based repetition effects were significant but also significantly weaker than those elicited by the experience of repetition (Experiments 1 & 2). Additionally, Experiment 2 clarified that adding a repetition-status tag in the experienced repetition condition did not impact truth judgments. Experiment 3 further showed that the instruction-based effect was still detectable when participants provided truth judgments for themselves rather than estimating other people’s judgments. We discuss the mechanisms that can explain these effects and their implications for advancing our understanding of the truth effect.

(Beginning of the) General Discussion 

Deciding whether information is true or false is a challenging task. Extensive research has shown that one key variable people often use to judge the truth of a statement is repetition (e.g., Hasher et al., 1977): repeated statements are judged more true than new ones (see Dechêne et al., 2010). Virtually all explanations of this truth effect refer to the processing consequences of repetition: repeated statements elicit higher recognition, higher familiarity, and higher fluency than new statements (see Unkelbach et al., 2019). However, in many communication situations, people learn that a statement is repeated (e.g., that it has occurred frequently) without prior exposure to the statement. Here, we asked whether repetition can be used as a cue for truth without prior exposure and, thus, in the absence of experiential consequences of repetition such as fluency.

Conclusion 

This work represents the first attempt to assess the impact of instructed repetition on truth judgments. Across three experiments, we found that the truth effect was stronger when repetition was experienced rather than merely instructed. However, we provided initial evidence that a component of the effect is unrelated to the experience of repetition. A truth effect was still detectable in the absence of any internal cue (i.e., fluency) induced by experienced repetition of the statement; this component must therefore rest on learning history or naïve beliefs. This finding paves the way for new research avenues aimed at isolating the unique contributions of known repetition and experienced fluency to truth judgments.


This research has multiple applications to psychotherapy, including how patients come to know what information about themselves and others is true, and how much of that judgment is driven by repetition versus internal cues, beliefs, or feelings. Human beings are meaning makers who try to assess how the world functions based on the meaning they project onto others.

Friday, July 8, 2022

AI bias can arise from annotation instructions

K. Wiggers & D. Coldewey
TechCrunch
Originally posted 8 MAY 22

Here is an excerpt:

As it turns out, annotators’ predispositions might not be solely to blame for the presence of bias in training labels. In a preprint study out of Arizona State University and the Allen Institute for AI, researchers investigated whether a source of bias might lie in the instructions written by dataset creators to serve as guides for annotators. Such instructions typically include a short description of the task (e.g., “Label all birds in these photos”) along with several examples.
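
To make that concrete, here is a minimal, hypothetical sketch (in Python, not taken from the study) of how such an instruction set might be represented: a short natural-language task description plus a few worked examples shown to annotators.

```python
# A hypothetical annotation-instruction record of the kind the study describes:
# a short task description plus example prompts shown to annotators.
instruction = {
    "task": (
        "Write a question about the passage whose answer requires resolving "
        "a pronoun or other referring expression."
    ),
    "examples": [
        "What is the name of the person who found the map?",
        "What is the name of the town the author left as a child?",
    ],
}

print(instruction["task"])
for example in instruction["examples"]:
    print("-", example)
```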

The researchers looked at 14 different “benchmark” datasets used to measure the performance of natural language processing systems, or AI systems that can classify, summarize, translate and otherwise analyze or manipulate text. In studying the task instructions provided to annotators who worked on the datasets, they found evidence that the instructions influenced the annotators to follow specific patterns, which then propagated to the datasets. For example, over half of the annotations in Quoref, a dataset designed to test the ability of AI systems to understand when two or more expressions refer to the same person (or thing), start with the phrase “What is the name,” a phrase present in a third of the instructions for the dataset.
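
The kind of pattern measurement described above can be illustrated with a small sketch; the annotations and instruction phrases below are hypothetical stand-ins, not Quoref data.

```python
from collections import Counter

# Hypothetical annotated questions (in practice these would be loaded from a
# dataset such as Quoref).
annotations = [
    "What is the name of the person who adopted the dog?",
    "What is the name of the city where the battle took place?",
    "Who wrote the letter that Maria received?",
    "What is the name of the band's first album?",
]

# Phrases appearing in the instruction examples shown to annotators (hypothetical).
instruction_phrases = ["What is the name", "Who wrote"]

def phrase_prevalence(annotations, phrases):
    """Fraction of annotations that start with each instruction phrase."""
    counts = Counter()
    for text in annotations:
        for phrase in phrases:
            if text.startswith(phrase):
                counts[phrase] += 1
    return {phrase: counts[phrase] / len(annotations) for phrase in phrases}

print(phrase_prevalence(annotations, instruction_phrases))
# {'What is the name': 0.75, 'Who wrote': 0.25}
```

Counting shared prefixes is a crude proxy, and the study’s actual analysis is more involved, but the underlying idea of quantifying how often annotations echo instruction phrasing is the same.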

The phenomenon, which the researchers call “instruction bias,” is particularly troubling because it suggests that systems trained on biased instruction/annotation data might not perform as well as initially thought. Indeed, the co-authors found that instruction bias overestimates the performance of systems and that these systems often fail to generalize beyond instruction patterns.
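
One way to see how instruction bias can inflate a headline score, sketched here with hypothetical names and toy data, is to split an evaluation set by whether each example matches an instruction pattern and compare accuracy on the two subsets; a large gap suggests the aggregate number overstates how well the system generalizes.

```python
# Hypothetical evaluation records: (question, model_correct) pairs.
eval_set = [
    ("What is the name of the ship's captain?", True),
    ("What is the name of the lake near the village?", True),
    ("Which treaty ended the conflict?", False),
    ("Where did the narrator grow up?", True),
]

instruction_phrases = ["What is the name"]

def matches_pattern(question, phrases):
    """True if the question starts with any instruction phrase."""
    return any(question.startswith(p) for p in phrases)

def accuracy(records):
    """Share of records the model got right; NaN for an empty subset."""
    if not records:
        return float("nan")
    return sum(correct for _, correct in records) / len(records)

in_pattern = [r for r in eval_set if matches_pattern(r[0], instruction_phrases)]
out_of_pattern = [r for r in eval_set if not matches_pattern(r[0], instruction_phrases)]

print(f"Accuracy on instruction-pattern examples: {accuracy(in_pattern):.2f}")
print(f"Accuracy on other examples:               {accuracy(out_of_pattern):.2f}")
```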

The silver lining is that large systems, like OpenAI’s GPT-3, were found to be generally less sensitive to instruction bias. But the research serves as a reminder that AI systems, like people, are susceptible to developing biases from sources that aren’t always obvious. The intractable challenge is discovering these sources and mitigating the downstream impact.