Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Misinformation.

Tuesday, February 1, 2022

Network Structure Impacts the Synchronization of Collective Beliefs

Vlasceanu, M., Morais, M. J., & Coman, A. 
(2021). Journal of Cognition and Culture.

Abstract

People’s beliefs are influenced by interactions within their communities. The propagation of this influence through conversational social networks should impact the degree to which community members synchronize their beliefs. To investigate, we recruited a sample of 140 participants and constructed fourteen 10-member communities. Participants first rated the accuracy of a set of statements (pre-test) and were then provided with relevant evidence about them. Then, participants discussed the statements in a series of conversational interactions, following pre-determined network structures (clustered/non-clustered). Finally, they rated the accuracy of the statements again (post-test). The results show that belief synchronization, measuring the increase in belief similarity among individuals within a community from pre-test to post-test, is influenced by the community’s conversational network structure. This synchronization is circumscribed by a degree of separation effect and is equivalent in the clustered and non-clustered networks. We also find that conversational content predicts belief change from pre-test to post-test.
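To make the synchronization measure concrete, here is a minimal sketch (an illustrative toy, not the authors’ analysis code) that treats each community member’s beliefs as a vector of accuracy ratings and quantifies synchronization as the change in average pairwise similarity from pre-test to post-test. The similarity metric, rating scale, and array shapes are assumptions made for illustration.

```python
import numpy as np
from itertools import combinations

def mean_pairwise_similarity(ratings: np.ndarray) -> float:
    """Average similarity across all pairs of community members.

    `ratings` has shape (n_members, n_statements). Similarity is taken here
    as the negative mean absolute difference in accuracy ratings (an
    assumption; the paper's exact metric may differ).
    """
    pairs = combinations(range(ratings.shape[0]), 2)
    sims = [-np.mean(np.abs(ratings[i] - ratings[j])) for i, j in pairs]
    return float(np.mean(sims))

def belief_synchronization(pre: np.ndarray, post: np.ndarray) -> float:
    """Increase in within-community belief similarity from pre-test to post-test."""
    return mean_pairwise_similarity(post) - mean_pairwise_similarity(pre)

# Toy example: one 10-member community rating 8 statements on a 1-7 scale.
rng = np.random.default_rng(0)
pre = rng.integers(1, 8, size=(10, 8)).astype(float)
post = (pre + rng.normal(0, 0.5, size=(10, 8))).clip(1, 7)  # hypothetical post-test drift
print(belief_synchronization(pre, post))
```

A positive value indicates that members' ratings moved closer together after the conversations; comparing this quantity across clustered and non-clustered communities is, in spirit, the comparison the study reports.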

From the Discussion

Understanding the mechanisms by which collective beliefs take shape and change over time is essential from a theoretical perspective (Vlasceanu, Enz, & Coman, 2018), but perhaps even more urgent from an applied point of view. This urgency is fueled by recent findings showing that false news diffuses farther, faster, deeper, and more broadly than true news in social networks (Vosoughi, Roy, & Aral, 2018), and that news can determine what people discuss and even change their beliefs (King, Schneer, & White, 2017). And given that beliefs influence people’s behaviors (Shariff & Rhemtulla, 2012; Mangels, Butterfield, Lamb, Good, & Dweck, 2006; Ajzen, 1991; Hochbaum, 1958), understanding the dynamics of collective belief formation is of vital social importance, as collective beliefs have the potential to affect some of the most pressing threats our society is facing, from pandemics (Pennycook, McPhetres, Zhang, & Rand, 2020) to climate change (Benegal & Scruggs, 2018). Thus, policy makers could use such findings in designing misinformation reduction campaigns targeting communities (Dovidio & Esses, 2007; Lewandowsky et al., 2012). For instance, these findings suggest such campaigns be sensitive to the conversational network structures of their targeted communities. Knowing how members of these communities are connected, and leveraging the finding that people synchronize their beliefs mainly with individuals they are directly connected to, could inform intervention designers about how communities with different connectivity structures might respond to their efforts. For example, when targeting a highly interconnected group, intervention designers could expect that administering the intervention to a few well-connected individuals will have a strong impact at the community level. In contrast, when targeting a less interconnected group, intervention designers could administer the intervention to more central individuals for a comparable effect.

Thursday, January 13, 2022

Beyond Populism: The Psychology of Status-Seeking and Extreme Political Discontent

Petersen, M., Osmundsen, M., & Bor, A. 
(2020, July 8).
https://doi.org/10.31234/osf.io/puqzs

Abstract

Modern democracies are currently experiencing destabilizing events including the emergence of demagogic leaders, the onset of street riots, circulation of misinformation and extremely hostile political engagements on social media. Some of these forms of discontent are commonly argued to be related to populism. In this chapter, however, we argue that the evolved psychology of status-seeking lies at the core of this syndrome of extreme political discontent. Thus, social status constitutes one of the key adaptive resources for any human, as it induces deference from others in conflicts of interest. Prior research has identified two routes to status: prestige acquired through service and dominance acquired through coercion. We argue that extreme political discontent involves behaviors aimed at dominance through engagement in either individual aggression or in mobilization processes that facilitate coalitional aggression. Consistent with this, we empirically demonstrate that measures of status-seeking via dominance correlate with indices of a large number of extreme forms of political discontent and do so more strongly than a measure of populism. Finally, we argue that the reason why dominance strategies become activated in the context of modern democratic politics is that increased inequality activates heightened needs for status and, under such conditions, dominance for some groups constitutes a more attainable route to status than prestige.

Towards depolarized societies 

Understanding the psychological and structural roots of extreme discontent is key if we are to move towards more peaceful societies. An exclusive focus on populism might lead to the expectation that the roots of discontent are value-based. For example, the rise of right-wing populism may suggest that frustrations are rooted in a decreasing respect for authorities and traditional forms of life. If that were indeed the case, a depolarized society might be reached only if non-populists were willing to compromise on important political values and to a larger extent embrace tradition and authority.

In contrast, the present arguments and results suggest that the true roots of the most extreme forms of discontent are less based on a conflict of abstract political values and more on a lack of social status and recognition. If so, the path towards depolarization lies in more inclusion and more equality, for example, based on an affirmation of the classical liberal doctrine of the importance of open, non-dominant exchange of arguments (Popper, 1945). Unfortunately, this is not something that can be fixed quickly, as would be the case if discontent were rooted in transient factors such as the behavior of social media algorithms. Rather, depolarization requires difficult structural changes that alleviate the onset of dominance motivations.

Wednesday, January 12, 2022

Hidden wisdom or pseudo-profound bullshit? The effect of speaker admirability

Kara-Yakoubian, et al.
(2021, October 28).
https://doi.org/10.31234/osf.io/tpnkw

Abstract

How do people reason in response to ambiguous messages shared by admirable individuals? Using behavioral markers and self-report questionnaires, in two experiments (N = 571) we examined the influence of speakers’ admirability on meaning-seeking and wise reasoning in response to pseudo-profound bullshit. In both studies, statements that sounded superficially impressive but lacked intent to communicate meaning generated meaning-seeking, but only when delivered by high-admirability speakers (e.g., the Dalai Lama) as compared to low-admirability speakers (e.g., Kim Kardashian). The effect of speakers’ admirability on meaning-seeking was unique to pseudo-profound bullshit statements and was absent for mundane (Study 1) and motivational (Study 2) statements. In Study 2, participants also engaged in wiser reasoning for pseudo-profound bullshit (vs. motivational) statements, and did so more when speakers were high in admirability. These effects occurred independently of the amount of time spent on statements or the complexity of participants’ reflections. It appears that pseudo-profound bullshit can promote epistemic reflection and certain aspects of wisdom when associated with an admirable speaker.

From the General Discussion

Pseudo-profound language represents a type of misinformation (Čavojová et al., 2019b; Littrell et al., 2021; Pennycook & Rand, 2019a) where ambiguity reigns. Our findings suggest that source admirability could play an important role in the cognitive processing of ambiguous misinformation, including fake news (Pennycook & Rand, 2020) and euphemistic language (Walker et al., 2021). For instance, in the case of fake news, people may be more inclined to engage in epistemic reflection if the source of an article is highly admirable. However, we also observed that statements from high (vs. low) admirability sources were judged as more profound and were better liked. Extended to misinformation, a combination of greater perceived profundity, liking, and acquired meaning could potentially facilitate the sharing of ambiguous fake news content throughout social networks. Increased reflective thinking (as measured by the CRT) has also been linked to greater discernment on social media, with individuals who score higher on the CRT being less likely to believe fake news stories and share this type of content (Mosleh et al., 2021; Pennycook & Rand, 2019a). Perhaps people might engage in more epistemic reflection if the source of an article is highly admirable, which may in turn predict a decrease in the sharing of fake news. Similarly, people may be more inclined to engage in epistemic reflection for euphemistic language, such as the term “enhanced interrogation” used in place of “torture,” and conclude that this type of language means something other than what it refers to, if used by a more admirable (compared to a less admirable) individual.

Monday, December 20, 2021

Parents protesting 'critical race theory' identify another target: Mental health programs

Tyler Kingkade and Mike Hixenbaugh
NBC News
Originally posted 15 NOV 21

At a September school board meeting in Southlake, Texas, a parent named Tara Eddins strode to the lectern during the public comment period and demanded to know why the Carroll Independent School District was paying counselors “at $90K a pop” to give students lessons on suicide prevention.

“At Carroll ISD, you are actually advertising suicide,” Eddins said, arguing that many parents in the affluent suburban school system have hired tutors because the district’s counselors are too focused on mental health instead of helping students prepare for college.

(cut)

In Carmel, Indiana, activists swarmed school board meetings this fall to demand that a district fire its mental health coordinator from what they said was a “dangerous, worthless” job. And in Fairfax County, Virginia, a national activist group condemned school officials for sending a survey to students that included questions like “During the past week, how often did you feel sad?”

Many of the school programs under attack fall under the umbrella of social emotional learning, or SEL, a teaching philosophy popularized in recent years that aims to help children manage their feelings and show empathy for others. Conservative groups argue that social emotional learning has become a “Trojan horse” for critical race theory, a separate academic concept that examines how systemic racism is embedded in society. They point to SEL lessons that encourage children to celebrate diversity, sometimes introducing students to conversations about race, gender and sexuality.

Activists have accused school districts of using the programs to ask children invasive questions — about their feelings, sexuality and the way race shapes their lives — as part of a ploy to “brainwash” them with liberal values and to trample parents’ rights. Groups across the country recently started circulating forms to get parents to opt their children out of surveys designed to measure whether students are struggling with their emotions or being bullied, describing the efforts as “data mining” and an invasion of privacy.

Thursday, July 15, 2021

Overconfidence in news judgments is associated with false news susceptibility

B. A. Lyons, et al.
PNAS, Jun 2021, 118 (23) e2019527118
DOI: 10.1073/pnas.2019527118

Abstract

We examine the role of overconfidence in news judgment using two large nationally representative survey samples. First, we show that three in four Americans overestimate their relative ability to distinguish between legitimate and false news headlines; respondents place themselves 22 percentiles higher than warranted on average. This overconfidence is, in turn, correlated with consequential differences in real-world beliefs and behavior. We show that overconfident individuals are more likely to visit untrustworthy websites in behavioral data; to fail to successfully distinguish between true and false claims about current events in survey questions; and to report greater willingness to like or share false content on social media, especially when it is politically congenial. In all, these results paint a worrying picture: The individuals who are least equipped to identify false news content are also the least aware of their own limitations and, therefore, more susceptible to believing it and spreading it further.

Significance

Although Americans believe the confusion caused by false news is extensive, relatively few indicate having seen or shared it—a discrepancy suggesting that members of the public may not only have a hard time identifying false news but fail to recognize their own deficiencies at doing so. If people incorrectly see themselves as highly skilled at identifying false news, they may unwittingly participate in its circulation. In this large-scale study, we show that not only is overconfidence extensive, but it is also linked to both self-reported and behavioral measures of false news website visits, engagement, and belief. Our results suggest that overconfidence may be a crucial factor for explaining how false and low-quality information spreads via social media.

Friday, May 14, 2021

The Internet as Cognitive Enhancement

Voinea, C., Vică, C., Mihailov, E. et al. 
Sci Eng Ethics 26, 2345–2362 (2020). 
https://doi.org/10.1007/s11948-020-00210-8

Abstract

The Internet has been identified in human enhancement scholarship as a powerful cognitive enhancement technology. It offers instant access to almost any type of information, along with the ability to share that information with others. The aim of this paper is to critically assess the enhancement potential of the Internet. We argue that unconditional access to information does not lead to cognitive enhancement. The Internet is not a simple, uniform technology, either in its composition or in its use. We will look into why the Internet as an informational resource currently fails to enhance cognition. We analyze some of the phenomena that emerge from vast, continual fluxes of information (information overload, misinformation, and persuasive design) and show how they could negatively impact users’ cognition. Methods for mitigating these negative impacts are then advanced: individual empowerment, better collaborative systems for sorting and categorizing information, and the use of artificial intelligence assistants that could guide users through the informational space of today’s Internet.

Conclusions

Although the Internet is one of the main drivers of change and evolution, its capacity to radically transform human cognition is exaggerated. No doubt this technology has improved numerous areas of our lives by facilitating access to and exchange of knowledge. However, its cognitive enhancement potential is not as clear as originally assumed. Too much information, misinformation, and the exploitation of users’ attention through persuasive design, could result in a serious decrease of users’ cognitive performance. The Internet is also an environment where users’ cognitive capacities are put under stress and their biases exploited.

Monday, February 22, 2021

Anger Increases Susceptibility to Misinformation

Greenstein M, Franklin N. 
Exp Psychol. 2020 May;67(3):202-209. 

Abstract

The effect of anger on acceptance of false details was examined using a three-phase misinformation paradigm. Participants viewed an event, were presented with schema-consistent and schema-irrelevant misinformation about it, and were given a surprise source monitoring test to examine the acceptance of the suggested material. Between each phase of the experiment, they performed a task that either induced anger or maintained a neutral mood. Participants showed greater susceptibility to schema-consistent than schema-irrelevant misinformation. Anger did not affect either recognition or source accuracy for true details about the initial event, but suggestibility for false details increased with anger. In spite of this increase in source errors (i.e., misinformation acceptance), both confidence in the accuracy of source attributions and decision speed for incorrect judgments also increased with anger. Implications are discussed with respect to both the general effects of anger and real-world applications such as eyewitness memory.

Friday, February 19, 2021

The Cognitive Science of Fake News

Pennycook, G., & Rand, D. G. 
(2020, November 18). 

Abstract

We synthesize a burgeoning literature investigating why people believe and share “fake news” and other misinformation online. Surprisingly, the evidence contradicts a common narrative whereby partisanship and politically motivated reasoning explain failures to discern truth from falsehood. Instead, poor truth discernment is linked to a lack of careful reasoning and relevant knowledge, and to the use of familiarity and other heuristics. Furthermore, there is a substantial disconnect between what people believe and what they will share on social media. This dissociation is largely driven by inattention, rather than purposeful sharing of misinformation. As a result, effective interventions can nudge social media users to think about accuracy, and can leverage crowdsourced veracity ratings to improve social media ranking algorithms.

From the Discussion

Indeed, recent research shows that a simple accuracy nudge intervention – specifically, having participants rate the accuracy of a single politically neutral headline (ostensibly as part of a pretest) prior to making judgments about social media sharing – improves the extent to which people discern between true and false news content when deciding what to share online in survey experiments. This approach has also been successfully deployed in a large-scale field experiment on Twitter, in which messages asking users to rate the accuracy of a random headline were sent to thousands of accounts that recently shared links to misinformation sites. This subtle nudge significantly increased the quality of the content they subsequently shared (see Figure 3B). Furthermore, survey experiments have shown that asking participants to explain how they know whether a headline is true or false before sharing it increases sharing discernment, and having participants rate accuracy at the time of encoding protects against familiarity effects.

Saturday, January 30, 2021

Scientific communication in a post-truth society

S. Iyengar & D. S. Massey
PNAS Apr 2019, 116 (16) 7656-7661

Abstract

Within the scientific community, much attention has focused on improving communications between scientists, policy makers, and the public. To date, efforts have centered on improving the content, accessibility, and delivery of scientific communications. Here we argue that in the current political and media environment faulty communication is no longer the core of the problem. Distrust in the scientific enterprise and misperceptions of scientific knowledge increasingly stem less from problems of communication and more from the widespread dissemination of misleading and biased information. We describe the profound structural shifts in the media environment that have occurred in recent decades and their connection to public policy decisions and technological changes. We explain how these shifts have enabled unscrupulous actors with ulterior motives increasingly to circulate fake news, misinformation, and disinformation with the help of trolls, bots, and respondent-driven algorithms. We document the high degree of partisan animosity, implicit ideological bias, political polarization, and politically motivated reasoning that now prevail in the public sphere and offer an actual example of how clearly stated scientific conclusions can be systematically perverted in the media through an internet-based campaign of disinformation and misinformation. We suggest that, in addition to attending to the clarity of their communications, scientists must also develop online strategies to counteract campaigns of misinformation and disinformation that will inevitably follow the release of findings threatening to partisans on either end of the political spectrum.

(cut)

At this point, probably the best that can be done is for scientists and their scientific associations to anticipate campaigns of misinformation and disinformation and to proactively develop online strategies and internet platforms to counteract them when they occur. For example, the National Academies of Sciences, Engineering, and Medicine could form a consortium of professional scientific organizations to fund the creation of a media and internet operation that monitors networks, channels, and web platforms known to spread false and misleading scientific information, so as to be able to respond quickly with a countervailing campaign of rebuttal based on accurate information through Facebook, Twitter, and other forms of social media.

Friday, July 10, 2020

Aging in an Era of Fake News

Brashier, N. M., & Schacter, D. L. (2020).
Current Directions in 
Psychological Science, 29(3), 316–323.

Abstract

Misinformation causes serious harm, from sowing doubt in modern medicine to inciting violence. Older adults are especially susceptible—they shared the most fake news during the 2016 U.S. election. The most intuitive explanation for this pattern lays the blame on cognitive deficits. Although older adults forget where they learned information, fluency remains intact, and knowledge accumulated across decades helps them evaluate claims. Thus, cognitive declines cannot fully explain older adults’ engagement with fake news. Late adulthood also involves social changes, including greater trust, difficulty detecting lies, and less emphasis on accuracy when communicating. In addition, older adults are relative newcomers to social media and may struggle to spot sponsored content or manipulated images. In a post-truth world, interventions should account for older adults’ shifting social goals and gaps in their digital literacy.

(cut)

The focus on “facts” at the expense of long-term trust is one reason why I see news organizations being ineffective in preventing, and in some cases facilitating, the establishment of “alternative narratives”. News reporting, as with any other type of declaration, can be ideologically, politically, and emotionally contested. The key differences in the current environment involve speed and transparency: First, people need to be exposed to the facts before the narrative can be strategically distorted through social media, distracting “leaks”, troll operations, and meme warfare. Second, while technological solutions for “fake news” are a valid effort, platforms policing content through opaque technologies adds yet another disruption in the layer of trust that should be reestablished directly between news organizations and their audiences.

A pdf can be found here.

Friday, May 8, 2020

Social-media companies must flatten the curve of misinformation

Joan Donovan
nature.com
Originally posted 14 April 20

Here is an excerpt:

After blanket coverage of the distortion of the 2016 US election, the role of algorithms in fanning the rise of the far right in the United States and United Kingdom, and of the antivax movement, tech companies have announced policies against misinformation. But they have slacked off on building the infrastructure to do commercial-content moderation and, despite the hype, artificial intelligence is not sophisticated enough to moderate social-media posts without human supervision. Tech companies acknowledge that groups, such as The Internet Research Agency and Cambridge Analytica, used their platforms for large-scale operations to influence elections within and across borders. At the same time, these companies have balked at removing misinformation, which they say is too difficult to identify reliably.

Moderating content after something goes wrong is too late. Preventing misinformation requires curating knowledge and prioritizing science, especially during a public crisis. In my experience, tech companies prefer to downplay the influence of their platforms, rather than to make sure that influence is understood. Proper curation requires these corporations to engage independent researchers, both to identify potential manipulation and to provide context for ‘authoritative content’.

Early this April, I attended a virtual meeting hosted by the World Health Organization, which had convened journalists, medical researchers, social scientists, tech companies and government representatives to discuss health misinformation. This cross-sector collaboration is a promising and necessary start. As I listened, though, I could not help but feel teleported back to 2017, when independent researchers first began uncovering the data trails of the Russian influence operations. Back then, tech companies were dismissive. If we can take on health misinformation collaboratively now, then we will have a model for future efforts.

The info is here.

Repetition increases Perceived Truth even for Known Falsehoods

Lisa Fazio
PsyArXiv
Originally posted 23 March 20
 
Abstract

Repetition increases belief in false statements. This illusory truth effect occurs with many different types of statements (e.g., trivia facts, news headlines, advertisements), and even occurs when the false statement contradicts participants’ prior knowledge. However, existing studies of the effect of prior knowledge on the illusory truth effect share a common flaw; they measure participants’ knowledge after the experimental manipulation and thus conditionalize responses on posttreatment variables. In the current study, we measure prior knowledge prior to the experimental manipulation and thus provide a cleaner measurement of the causal effect of repetition on belief. We again find that prior knowledge does not protect against the illusory truth effect. Repeated false statements were given higher truth ratings than novel statements, even when they contradicted participants’ prior knowledge.

From the Discussion

As in previous research (Brashier et al., 2017; Fazio et al., 2015), prior knowledge did not protect participants from the illusory truth effect. Repeated falsehoods were rated as being more true than novel falsehoods, even when they both contradicted participants’ prior knowledge. By measuring prior knowledge before the experimental session, this study avoids conditioning on posttreatment variables and provides cleaner evidence for the effect (Montgomery et al., 2018). Whether prior knowledge is measured before or after the manipulation, it is clear that repetition increases belief in falsehoods that contradict existing knowledge.

The research is here.

Tuesday, January 28, 2020

Why Misinformation Is About Who You Trust, Not What You Think

Brian Gallagher and Kevin Berger
Nautil.us
Originally published 14 Feb 19

Here is an excerpt:

When it comes to misinformation, ’twas always thus. What’s changed now?

O’Connor: It’s always been the case that humans have been dependent on social ties to gain knowledge and belief. There’s been misinformation and propaganda for hundreds of years. If you’re a governing body, you have interests you’re trying to protect. You want to control what people believe. What’s changed is social media and the structure of communication between people. Now people have tremendous ability to shape who they interact with. Say you’re an anti-vaxxer. You find people online who are also anti-vaxxers and communicate with them rather than people who challenge your beliefs.

The other important thing is that this new structure means that all sorts of influencers—the Russian government, various industry groups, other government groups—have direct access to people. They can communicate with people in a much more personal way. They can pose on Twitter and Facebook as a normal person who you might want to interact with. If you look at Facebook in the lead up to the 2016 election, the Russian Internet Research Agency created animal-lovers groups, Black Lives Matter groups, gun-rights groups, and anti-immigrant groups. They could build trust with people who would naturally be part of these groups. And once they grounded that trust, they could influence them by getting them not to vote or by driving polarization, causing more extreme rhetoric. They can make other people trust them in ways that would have been very difficult without social media.

Weatherall: People tend to trust their friends, their family, people who they share other affinities with. So if the message can look like it’s coming from those people, it can be very effective. Another thing that’s become widespread is the ability to produce easily shareable visual media. The memes we see on Twitter or on Facebook don’t really say anything, they conjure up an emotion—an emotion associated with an ideology or belief you might have. It’s a type of misinformation that supports your beliefs without ever coming out and saying something false or saying anything.

The interview is here.

Thursday, December 5, 2019

How Misinformation Spreads--and Why We Trust It

Cailin O'Connor and James Owen Weatherall
Scientific American
Originally posted September 2019

Here is an excerpt:

Many communication theorists and social scientists have tried to understand how false beliefs persist by modeling the spread of ideas as a contagion. Employing mathematical models involves simulating a simplified representation of human social interactions using a computer algorithm and then studying these simulations to learn something about the real world. In a contagion model, ideas are like viruses that go from mind to mind.

You start with a network, which consists of nodes, representing individuals, and edges, representing social connections. You seed an idea in one “mind” and see how it spreads under various assumptions about when transmission will occur.
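To make the contagion picture concrete, here is a minimal sketch (an illustrative toy with assumed parameters, not the authors’ model): one node is seeded with an idea, and in each round every current believer transmits it to each unconvinced neighbor with a fixed probability.

```python
import random

# Hypothetical social network: node -> set of neighbors (the edges).
network = {
    0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4},
    3: {1, 5}, 4: {2, 5}, 5: {3, 4},
}

def spread(network, seed, p_transmit=0.3, rounds=10, rng=None):
    """Simulate a simple idea contagion over the network.

    Seed one "mind" with the idea, then let it pass along each edge from a
    believer to a non-believer with probability `p_transmit` per round.
    """
    rng = rng or random.Random(42)
    believers = {seed}
    for _ in range(rounds):
        newly_convinced = {
            neighbor
            for node in believers
            for neighbor in network[node]
            if neighbor not in believers and rng.random() < p_transmit
        }
        if not newly_convinced:  # the spread has stalled
            break
        believers |= newly_convinced
    return believers

print(spread(network, seed=0))  # which "minds" hold the idea at the end
```

Varying the transmission probability or the network's edge structure changes how far and how fast the idea travels, which is the kind of behavior these models are used to study.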

Contagion models are extremely simple but have been used to explain surprising patterns of behavior, such as the epidemic of suicide that reportedly swept through Europe after publication of Goethe's The Sorrows of Young Werther in 1774 or when dozens of U.S. textile workers in 1962 reported suffering from nausea and numbness after being bitten by an imaginary insect. They can also explain how some false beliefs propagate on the Internet.

Before the last U.S. presidential election, an image of a young Donald Trump appeared on Facebook. It included a quote, attributed to a 1998 interview in People magazine, saying that if Trump ever ran for president, it would be as a Republican because the party is made up of “the dumbest group of voters.” Although it is unclear who “patient zero” was, we know that this meme passed rapidly from profile to profile.

The meme's veracity was quickly evaluated and debunked. The fact-checking Web site Snopes reported that the quote was fabricated as early as October 2015. But as with the tomato hornworm, these efforts to disseminate truth did not change how the rumors spread. One copy of the meme alone was shared more than half a million times. As new individuals shared it over the next several years, their false beliefs infected friends who observed the meme, and they, in turn, passed the false belief on to new areas of the network.

This is why many widely shared memes seem to be immune to fact-checking and debunking. Each person who shared the Trump meme simply trusted the friend who had shared it rather than checking for themselves.

Putting the facts out there does not help if no one bothers to look them up. It might seem like the problem here is laziness or gullibility—and thus that the solution is merely more education or better critical thinking skills. But that is not entirely right.

Sometimes false beliefs persist and spread even in communities where everyone works very hard to learn the truth by gathering and sharing evidence. In these cases, the problem is not unthinking trust. It goes far deeper than that.

The info is here.

Thursday, October 24, 2019

Facebook isn’t free speech, it’s algorithmic amplification optimized for outrage

Jon Evans
techcrunch.com
Originally published October 20, 2019

This week Mark Zuckerberg gave a speech in which he extolled “giving everyone a voice” and fighting “to uphold as wide a definition of freedom of expression as possible.” That sounds great, of course! Freedom of expression is a cornerstone, if not the cornerstone, of liberal democracy. Who could be opposed to that?

The problem is that Facebook doesn’t offer free speech; it offers free amplification. No one would much care about anything you posted to Facebook, no matter how false or hateful, if people had to navigate to your particular page to read your rantings, as in the very early days of the site.

But what people actually read on Facebook is what’s in their News Feed … and its contents, in turn, are determined not by giving everyone an equal voice, and not by a strict chronological timeline. What you read on Facebook is determined entirely by Facebook’s algorithm, which elides much — censors much, if you wrongly think the News Feed is free speech — and amplifies little.

What is amplified? Two forms of content. For native content, the algorithm optimizes for engagement. This in turn means people spend more time on Facebook, and therefore more time in the company of that other form of content which is amplified: paid advertising.

Of course this isn’t absolute. As Zuckerberg notes in his speech, Facebook works to stop things like hoaxes and medical misinformation from going viral, even if they’re otherwise anointed by the algorithm. But he has specifically decided that Facebook will not attempt to stop paid political misinformation from going viral.

The info is here.

Editor's note: Facebook is one of the most defective products that millions of Americans use every day.

Thursday, June 27, 2019

This doctor is recruiting an army of medical experts to drown out fake health news on Instagram and Twitter

Christina Farr
CNBC.com
Originally published June 2, 2019

The antidote to fake health news? According to Austin Chiang, the first chief medical social media officer at a top hospital, it’s to drown out untrustworthy content with tweets, pics and posts from medical experts that the average American can relate to.

Chiang is a Harvard-trained gastroenterologist with a side passion for social media. On Instagram, where he refers to himself as a “GI Doctor,” he has 20,000 followers, making him one of the most influential docs aside from TV personalities, plastic surgeons and New York’s so-called “most eligible bachelor,” Dr. Mike.

Every few days, he’ll share a selfie or a photo of himself in scrubs along with captions about the latest research or insights from conferences he attends, or advice to patients trying to sort out real information from rumors. He’s also active on Twitter, Microsoft’s LinkedIn and Facebook (which owns Instagram).

But Chiang recognizes that his following pales in comparison to accounts like “Medical Medium,” where two million people tune in to the musings of a psychic who raves about vegetables that will cure diseases ranging from depression to diabetes. (Gwyneth Paltrow’s Goop has written about the account’s creator glowingly.) Or on Pinterest and Facebook, where anti-vaccination content has been far more prominent than legitimate public health information. Meanwhile, on e-commerce sites like Amazon and eBay, vendors have hawked unproven and dangerous health “cures,” including an industrial-strength bleach that is billed as eliminating autism in children.

The info is here.

Friday, December 14, 2018

Don’t Want to Fall for Fake News? Don’t Be Lazy

Robbie Gonzalez
www.wired.com
Originally posted November 9, 2018

Here are two excerpts:

Misinformation researchers have proposed two competing hypotheses for why people fall for fake news on social media. The popular assumption—supported by research on apathy over climate change and the denial of its existence—is that people are blinded by partisanship, and will leverage their critical-thinking skills to ram the square pegs of misinformation into the round holes of their particular ideologies. According to this theory, fake news doesn't so much evade critical thinking as weaponize it, preying on partiality to produce a feedback loop in which people become worse and worse at detecting misinformation.

The other hypothesis is that reasoning and critical thinking are, in fact, what enable people to distinguish truth from falsehood, no matter where they fall on the political spectrum. (If this sounds less like a hypothesis and more like the definitions of reasoning and critical thinking, that's because they are.)

(cut)

All of which suggests susceptibility to fake news is driven more by lazy thinking than by partisan bias. Which on one hand sounds—let's be honest—pretty bad. But it also implies that getting people to be more discerning isn't a lost cause. Changing people's ideologies, which are closely bound to their sense of identity and self, is notoriously difficult. Getting people to think more critically about what they're reading could be a lot easier, by comparison.

Then again, maybe not. "I think social media makes it particularly hard, because a lot of the features of social media are designed to encourage non-rational thinking," Rand says. Anyone who has sat and stared vacantly at their phone while thumb-thumb-thumbing to refresh their Twitter feed, or closed out of Instagram only to re-open it reflexively, has experienced firsthand what it means to browse in such a brain-dead, ouroboric state. Default settings like push notifications, autoplaying videos, algorithmic news feeds—they all cater to humans' inclination to consume things passively instead of actively, to be swept up by momentum rather than resist it.

The info is here.

Why Health Professionals Should Speak Out Against False Beliefs on the Internet

Joel T. Wu and Jennifer B. McCormick
AMA J Ethics. 2018;20(11):E1052-1058.
doi: 10.1001/amajethics.2018.1052.

Abstract

Broad dissemination and consumption of false or misleading health information, amplified by the internet, poses risks to public health and problems for both the health care enterprise and the government. In this article, we review government power for, and constitutional limits on, regulating health-related speech, particularly on the internet. We suggest that government regulation can only partially address false or misleading health information dissemination. Drawing on the American Medical Association’s Code of Medical Ethics, we argue that health care professionals have responsibilities to convey truthful information to patients, peers, and communities. Finally, we suggest that all health care professionals have essential roles in helping patients and fellow citizens obtain reliable, evidence-based health information.

Here is an excerpt:

We would suggest that health care professionals have an ethical obligation to correct false or misleading health information, share truthful health information, and direct people to reliable sources of health information within their communities and spheres of influence. After all, health and well-being are values shared by almost everyone. Principle V of the AMA Principles of Ethics states: “A physician shall continue to study, apply, and advance scientific knowledge, maintain a commitment to medical education, make relevant information available to patients, colleagues, and the public, obtain consultation, and use the talents of other health professionals when indicated” (italics added). And Principle VII states: “A physician shall recognize a responsibility to participate in activities contributing to the improvement of the community and the betterment of public health” (italics added). Taken together, these principles articulate an ethical obligation to make relevant information available to the public to improve community and public health. In the modern information age, wherein the unconstrained and largely unregulated proliferation of false health information is enabled by the internet and medical knowledge is no longer privileged, these 2 principles have a special weight and relevance.

Monday, November 5, 2018

Bolton says 'excessive' ethics checks discourage outsiders from joining government

Nicole Gaouette
CNN.com
Originally posted October 31, 2018

A day after CNN reported that the Justice Department is investigating whether Interior Secretary Ryan Zinke has broken the law by using his office to personally enrich himself, national security adviser John Bolton told the Hamilton Society in Washington that ethics rules make it hard for people outside of the government to serve.

Bolton said "things have gotten more bureaucratic, harder to get things done" since he served under President George H.W. Bush in the 1990s and blamed the difficulty, in part, on the "excessive nature of the so-called ethics checks."

"If you were designing a system to discourage people from coming into government, you would do it this way," Bolton said.

"That risks building up a priestly class" of government employees, he added.

"It's really depressing to see," Bolton said of the bureaucratic red tape.

The info is here.

My take: Mr. Bolton is wrong.  We need rigorous ethical guidelines, transparency, enforceability, and thorough background checks.  Otherwise, the swamp will grow much greater than it already is.

Tuesday, March 13, 2018

Cognitive Ability and Vulnerability to Fake News

David Z. Hambrick and Madeline Marquardt
Scientific American
Originally posted on February 6, 2018

“Fake news” is Donald Trump’s favorite catchphrase. Since the election, it has appeared in some 180 tweets by the President, decrying everything from accusations of sexual assault against him to the Russian collusion investigation to reports that he watches up to eight hours of television a day. Trump may just use “fake news” as a rhetorical device to discredit stories he doesn’t like, but there is evidence that real fake news is a serious problem. As one alarming example, an analysis by the internet media company Buzzfeed revealed that during the final three months of the 2016 U.S. presidential campaign, the 20 most popular false election stories generated around 1.3 million more Facebook engagements—shares, reactions, and comments—than did the 20 most popular legitimate stories. The most popular fake story was “Pope Francis Shocks World, Endorses Donald Trump for President.”

Fake news can distort people’s beliefs even after being debunked. For example, repeated over and over, a story such as the one about the Pope endorsing Trump can create a glow around a political candidate that persists long after the story is exposed as fake. A study recently published in the journal Intelligence suggests that some people may have an especially difficult time rejecting misinformation.

The article is here.