Riva, P., Aureli, N., & Silvestrini, F. (2022). Acta Psychologica, 229, 103681.
Abstract
The spread of artificial intelligence (AI) technologies into ever-widening domains (e.g., virtual assistants) increases the chances of daily interactions between humans and AI. But can non-human agents influence human beings, perhaps even more strongly than another human being can? This research investigated whether people faced with different tasks (objective vs. subjective) would be more influenced by information provided by another human being or by an AI. We expected greater AI (vs. human) influence in objective tasks (i.e., tasks based on counting, with only one correct answer). By contrast, we expected greater human (vs. AI) influence in subjective tasks (based on attributing meaning to evocative images). In Study 1, participants (N = 156) completed a series of trials of an objective task, providing numerical estimates of the number of white dots pictured on black backgrounds. Results showed that participants conformed more to the AI's responses than to the human's. In Study 2, participants (N = 102) completed a series of subjective tasks in which they observed evocative images paired with two concepts, ostensibly provided, again, by an AI or a human. They then rated how appropriately each concept described the images. Unlike in the objective task, in the subjective one participants conformed more to the human's responses than to the AI's. Overall, our findings show that under some circumstances AI can influence people above and beyond the influence of other humans, offering new insights into social influence processes in the digital era.
Conclusion
Our research might offer new insights into social influence processes in the digital era. The results showed that, under specific circumstances, people can conform more to non-human agents than to human ones in a digital context. For objective tasks that elicit uncertainty, people may be more prone to conform to AI agents than to another human being, whereas for subjective tasks, other humans may remain the most credible source of influence compared with AI agents. These findings highlight the importance of matching the type of agent to the type of task in order to maximize social influence. They could also matter for developers of non-human agents, showing the circumstances under which a person is more prone to follow a non-human agent's guidance. Deploying a non-human agent for a task in which it is not well trusted could be suboptimal. Conversely, in objective tasks that elicit uncertainty, it might be advantageous to emphasize the agent's nature as artificial intelligence rather than trying to disguise the agent as human (as some existing chatbots tend to do). In conclusion, it is important to consider both that non-human agents can become credible sources of social influence and that the type of agent should match the type of task.
Summary:
The first study found that people conformed more to AI than to human sources on objective tasks, such as estimating the number of white dots on a black background. The second study found that people conformed more to human than to AI sources on subjective tasks, such as attributing meaning to evocative images.
The authors conclude that their findings suggest AI can be a powerful source of social influence, especially on objective tasks. However, they also note that the literature on AI and social influence is still limited, and that more research is needed to understand the conditions under which AI is more or less influential than human sources.
Key points:
- The spread of AI technologies is increasing the chances of daily interactions between humans and AI.
- Research has shown that people can be influenced by AI on objective tasks, but they may be more influenced by humans on subjective tasks.
- More research is needed to understand the conditions under which AI can be more or less influential than human sources.