Lin, Z. (2025b). Nature, 645(8080), 285.
Artificial intelligence (AI) systems are consuming vast amounts of online content yet directing few users to the publishers of that content. In early 2025, US-based company OpenAI collected around 250 pages of material for every visitor it sent to a publisher's website. By mid-2025, that figure had soared to 1,500, according to Matthew Prince, chief executive of US-based Internet-security firm Cloudflare. The extraction rate of US-based AI start-up Anthropic climbed even higher over the same period, from 6,000 pages to 60,000. Even tech giant Google, long considered an asset to publishers because of the referral traffic it generated, tripled its ratio from 6 pages to 18 with the launch of its AI Overviews feature. The current information ecosystem is dominated by 'answer engines': AI chatbots that synthesize and deliver information directly, and users now trust those answers more than ever.
As a researcher in metascience and psychology, I see this transition as the most important change in knowledge discovery in a generation. Although these tools can answer questions faster and often more accurately than search engines can, this efficiency has a price. Beyond the decimation of web traffic to publishers, there is a more insidious cost: not AI's 'hallucinations', fabrications that can be corrected, but the biases and vulnerabilities in the real information that these systems present to users.
Here are some thoughts:
Psychologists should be deeply concerned about the rise of AI 'answer engines', the chatbots and AI Overviews that now dominate information discovery, because they are fundamentally altering how we find and consume knowledge, often without directing users to original sources. This shift is not just reducing traffic to publishers; it is silently distorting the scientific record itself. AI systems, trained on existing online content, amplify entrenched biases: they over-represent research by scholars with names classified as white and under-represent those classified as Asian, mirroring and exacerbating societal inequities in academia. Crucially, they massively inflate the Matthew effect, disproportionately recommending the most-cited papers (more than 60% of suggestions fall in the top 1%) and drowning out novel, lesser-known work that might challenge prevailing paradigms.

While researchers focus on AI-generated hallucinations or the ethics of AI-assisted writing, the far more insidious threat lies in AI's silent curation of which literature we see, which methods we consider relevant and which researchers we cite, potentially narrowing scientific inquiry and entrenching systemic biases at a foundational level. The field urgently needs research into AI-assisted information retrieval, and policies that address this hidden bias in knowledge discovery, not just in content generation.