Joan Donovan
nature.com
Originally posted 14 April 2020
Here is an excerpt:
After blanket coverage of the distortion of the 2016 US election, the role of algorithms in fanning the rise of the far right in the United States and United Kingdom, and of the anti-vax movement, tech companies have announced policies against misinformation. But they have slacked off on building the infrastructure to do commercial-content moderation and, despite the hype, artificial intelligence is not sophisticated enough to moderate social-media posts without human supervision. Tech companies acknowledge that groups such as the Internet Research Agency and Cambridge Analytica used their platforms for large-scale operations to influence elections within and across borders. At the same time, these companies have balked at removing misinformation, which they say is too difficult to identify reliably.
Moderating content after something goes wrong is too late. Preventing misinformation requires curating knowledge and prioritizing science, especially during a public crisis. In my experience, tech companies prefer to downplay the influence of their platforms, rather than to make sure that influence is understood. Proper curation requires these corporations to engage independent researchers, both to identify potential manipulation and to provide context for ‘authoritative content’.
Early this April, I attended a virtual meeting hosted by the World Health Organization, which had convened journalists, medical researchers, social scientists, tech companies and government representatives to discuss health misinformation. This cross-sector collaboration is a promising and necessary start. As I listened, though, I could not help but feel teleported back to 2017, when independent researchers first began uncovering the data trails of the Russian influence operations. Back then, tech companies were dismissive. If we can take on health misinformation collaboratively now, then we will have a model for future efforts.
The info is here.