Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Fake News.

Thursday, March 21, 2024

AI-synthesized faces are indistinguishable from real faces and more trustworthy

Nightingale, S. J., & Farid, H. (2022).
Proceedings of the National Academy of Sciences, 119(8).

Abstract

Artificial intelligence (AI)–synthesized text, audio, image, and video are being weaponized for the purposes of nonconsensual intimate imagery, financial fraud, and disinformation campaigns. Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable from—and more trustworthy than—real faces.

Here is part of the Discussion section:

Synthetically generated faces are not just highly photorealistic, they are nearly indistinguishable from real faces and are judged more trustworthy. This hyperphotorealism is consistent with recent findings. These two studies did not contain the same diversity of race and gender as ours, nor did they match the real and synthetic faces as we did to minimize the chance of inadvertent cues. While it is less surprising that White male faces are highly realistic—because these faces dominate the neural network training—we find that the realism of synthetic faces extends across race and gender. Perhaps most interestingly, we find that synthetically generated faces are more trustworthy than real faces. This may be because synthesized faces tend to look more like average faces which themselves are deemed more trustworthy. Regardless of the underlying reason, synthetically generated faces have emerged on the other side of the uncanny valley. This should be considered a success for the fields of computer graphics and vision. At the same time, easy access (https://thispersondoesnotexist.com) to such high-quality fake imagery has led and will continue to lead to various problems, including more convincing online fake profiles and—as synthetic audio and video generation continues to improve—problems of nonconsensual intimate imagery, fraud, and disinformation campaigns, with serious implications for individuals, societies, and democracies.

We, therefore, encourage those developing these technologies to consider whether the associated risks are greater than their benefits. If so, then we discourage the development of technology simply because it is possible. If not, then we encourage the parallel development of reasonable safeguards to help mitigate the inevitable harms from the resulting synthetic media. Safeguards could include, for example, incorporating robust watermarks into the image and video synthesis networks that would provide a downstream mechanism for reliable identification. Because it is the democratization of access to this powerful technology that poses the most significant threat, we also encourage reconsideration of the often laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application.

Here are some important points:

This research raises concerns about the potential for misuse of AI-generated faces in areas like deepfakes and disinformation campaigns.

It also opens up interesting questions about how we perceive trust and authenticity in our increasingly digital world.

Wednesday, November 1, 2023

People believe misinformation is a threat because they assume others are gullible

Altay, S., & Acerbi, A. (2023).
New Media & Society, 0(0).

Abstract

Alarmist narratives about the flow of misinformation and its negative consequences have gained traction in recent years. While these fears are to some extent warranted, the scientific literature suggests that many of them are exaggerated. Why are people so worried about misinformation? In two pre-registered surveys conducted in the United Kingdom (Nstudy_1 = 300, Nstudy_2 = 300) and replicated in the United States (Nstudy_1 = 302, Nstudy_2 = 299), we investigated the psychological factors associated with perceived danger of misinformation and how it contributes to the popularity of alarmist narratives on misinformation. We find that the strongest, and most reliable, predictor of perceived danger of misinformation is the third-person effect (i.e. the perception that others are more vulnerable to misinformation than the self) and, in particular, the belief that “distant” others (as opposed to family and friends) are vulnerable to misinformation. The belief that societal problems have simple solutions and clear causes was consistently, but weakly, associated with perceived danger of online misinformation. Other factors, like negative attitudes toward new technologies and higher sensitivity to threats, were inconsistently, and weakly, associated with perceived danger of online misinformation. Finally, we found that participants who report being more worried about misinformation are more willing to like and share alarmist narratives on misinformation. Our findings suggest that fears about misinformation tap into our tendency to view other people as gullible.

My thoughts:

The authors conducted two pre-registered surveys in the United Kingdom and replicated them in the United States. They found that people who believed that others were more gullible than themselves were also more likely to perceive misinformation as a threat. This relationship was independent of other factors such as people's political beliefs, media consumption habits, and trust in institutions.

The authors argue that this finding suggests that people's concerns about misinformation may be rooted in their own biases about the intelligence and critical thinking skills of others. They also suggest that this bias may make people more likely to share and spread misinformation themselves.

The authors conclude by calling for more research on the role of bias in people's perceptions of misinformation. They also suggest that interventions to reduce misinformation should address people's biases about the gullibility of others.

One implication of this research is that people who are concerned about misinformation should be mindful of their own biases. It is important to remember that everyone is vulnerable to misinformation, regardless of their intelligence or education level. We should all be critical of the information we encounter online and be careful about sharing things that we are not sure are true.

Friday, September 15, 2023

Older Americans are more vulnerable to prior exposure effects in news evaluation.

Lyons, B. A. (2023). 
Harvard Kennedy School Misinformation Review.

Outline

Older news users may be especially vulnerable to prior exposure effects, whereby news comes to be seen as more accurate over multiple viewings. I test this in re-analyses of three two-wave, nationally representative surveys in the United States (N = 8,730) in which respondents rated a series of mainstream, hyperpartisan, and false political headlines (139,082 observations). I find that prior exposure effects increase with age—being strongest for those in the oldest cohort (60+)—especially for false news. I discuss implications for the design of media literacy programs and policies regarding targeted political advertising aimed at this group.

Essay Summary
  • I used three two-wave, nationally representative surveys in the United States (N = 8,730) in which respondents rated a series of actual mainstream, hyperpartisan, or false political headlines. Respondents saw a sample of headlines in the first wave and all headlines in the second wave, allowing me to determine if prior exposure increases perceived accuracy differentially across age.  
  • I found that the effect of prior exposure to headlines on perceived accuracy increases with age. The effect increases linearly with age, with the strongest effect for those in the oldest age cohort (60+). These age differences were most pronounced for false news.
  • These findings suggest that repeated exposure can help account for the positive relationship between age and sharing false information online. However, the size of this effect also underscores that other factors (e.g., greater motivation to derogate the out-party) may play a larger role. 
The beginning of the Implications Section

Web-tracking and social media trace data paint a concerning portrait of older news users. Older American adults were much more likely to visit dubious news sites in 2016 and 2020 (Guess, Nyhan, et al., 2020; Moore et al., 2023), and were also more likely to be classified as false news “supersharers” on Twitter, a group who shares the vast majority of dubious news on the platform (Grinberg et al., 2019). Likewise, this age group shares about seven times more links to these domains on Facebook than younger news consumers (Guess et al., 2019; Guess et al., 2021). 

Interestingly, however, older adults appear to be no worse, if not better, at identifying false news stories than younger cohorts when asked in surveys (Brashier & Schacter, 2020). Why might older adults identify false news in surveys but fall for it “in the wild”? There are likely multiple factors at play, ranging from social changes across the lifespan (Brashier & Schacter, 2020) to changing orientations to politics (Lyons et al., 2023) to cognitive declines (e.g., in memory) (Brashier & Schacter, 2020). In this paper, I focus on one potential contributor. Specifically, I tested the notion that differential effects of prior exposure to false news help account for the disjuncture between older Americans’ performance in survey tasks and their behavior in the wild.

A large body of literature has been dedicated to exploring the magnitude and potential boundary conditions of the illusory truth effect (Hassan & Barber, 2021; Henderson et al., 2021; Pillai & Fazio, 2021)—a phenomenon in which false statements or news headlines (De keersmaecker et al., 2020; Pennycook et al., 2018) come to be believed over multiple exposures. Might this effect increase with age? As detailed by Brashier and Schacter (2020), cognitive deficits are often blamed for older news users’ behaviors. This may be because cognitive abilities are strongest in young adulthood and slowly decline beyond that point (Salthouse, 2009), resulting in increasingly effortful cognition (Hess et al., 2016). As this process unfolds, older adults may be more likely to fall back on heuristics when judging the veracity of news items (Brashier & Marsh, 2020). Repetition, the source of the illusory truth effect, is one heuristic that may be relied upon in such a scenario. This is because repeated messages feel easier to process and thus are seen as truer than unfamiliar ones (Unkelbach et al., 2019).

Monday, August 28, 2023

'You can't bullshit a bullshitter' (or can you?): Bullshitting frequency predicts receptivity to various types of misleading information

Littrell, S., Risko, E. F., & Fugelsang, J. A. (2021).
The British Journal of Social Psychology, 60(4), 1484–1505.

Abstract

Research into both receptivity to falling for bullshit and the propensity to produce it have recently emerged as active, independent areas of inquiry into the spread of misleading information. However, it remains unclear whether those who frequently produce bullshit are inoculated from its influence. For example, both bullshit receptivity and bullshitting frequency are negatively related to cognitive ability and aspects of analytic thinking style, suggesting that those who frequently engage in bullshitting may be more likely to fall for bullshit. However, separate research suggests that individuals who frequently engage in deception are better at detecting it, thus leading to the possibility that frequent bullshitters may be less likely to fall for bullshit. Here, we present three studies (N = 826) attempting to distinguish between these competing hypotheses, finding that frequency of persuasive bullshitting (i.e., bullshitting intended to impress or persuade others) positively predicts susceptibility to various types of misleading information and that this association is robust to individual differences in cognitive ability and analytic cognitive style.

Conclusion

Gaining a better understanding of the differing ways in which various types of misleading information are transmitted and received is becoming increasingly important in the information age (Kristiansen & Kaussler, 2018). Indeed, an oft-repeated maxim in popular culture is, “you can’t bullshit a bullshitter.” While folk wisdom may assert that this is true, the present investigation suggests that the reality is a bit more complicated. Our primary aim was to examine the extent to which bullshitting frequency is associated with susceptibility to falling for bullshit. Overall, we found that persuasive bullshitters (but not evasive bullshitters) were more receptive to various types of bullshit and, in the case of pseudo-profound statements, even when controlling for factors related to intelligence and analytic thinking. These results enrich our understanding of the transmission and detection of certain types of misleading information, specifically the associations between the propensity to produce and the tendency to fall for bullshit, and will help to inform future research in this growing area of scholarship.



Wednesday, July 19, 2023

Accuracy and social motivations shape judgements of (mis)information

Rathje, S., Roozenbeek, J., Van Bavel, J.J. et al.
Nat Hum Behav 7, 892–903 (2023).

Abstract

The extent to which belief in (mis)information reflects lack of knowledge versus a lack of motivation to be accurate is unclear. Here, across four experiments (n = 3,364), we motivated US participants to be accurate by providing financial incentives for correct responses about the veracity of true and false political news headlines. Financial incentives improved accuracy and reduced partisan bias in judgements of headlines by about 30%, primarily by increasing the perceived accuracy of true news from the opposing party (d = 0.47). Incentivizing people to identify news that would be liked by their political allies, however, decreased accuracy. Replicating prior work, conservatives were less accurate at discerning true from false headlines than liberals, yet incentives closed the gap in accuracy between conservatives and liberals by 52%. A non-financial accuracy motivation intervention was also effective, suggesting that motivation-based interventions are scalable. Altogether, these results suggest that a substantial portion of people’s judgements of the accuracy of news reflects motivational factors.

Conclusions

There is a sizeable partisan divide in the kind of news liberals and conservatives believe in, and conservatives tend to believe in and share more false news than liberals. Our research suggests these differences are not immutable. Motivating people to be accurate improves accuracy about the veracity of true (but not false) news headlines, reduces partisan bias and closes a substantial portion of the gap in accuracy between liberals and conservatives. Theoretically, these results identify accuracy and social motivations as key factors in driving news belief and sharing. Practically, these results suggest that shifting motivations may be a useful strategy for creating a shared reality across the political spectrum.

Key findings
  • Accuracy motivations: Participants who were motivated to be accurate were more likely to correctly identify true and false news headlines.
  • Social motivations: Participants who were motivated to identify news that would be liked by their political allies were less likely to correctly identify true and false news headlines.
  • Combination of motivations: Participants who were motivated by both accuracy and social motivations were more likely to correctly identify true news headlines from the opposing political party.

Thursday, June 15, 2023

Moralization and extremism robustly amplify myside sharing

Marie, A., Altay, S., et al.
PNAS Nexus, Volume 2, Issue 4, April 2023.

Abstract

We explored whether moralization and attitude extremity may amplify a preference to share politically congruent (“myside”) partisan news and what types of targeted interventions may reduce this tendency. Across 12 online experiments (N = 6,989), we examined decisions to share news touching on the divisive issues of gun control, abortion, gender and racial equality, and immigration. Myside sharing was systematically observed and was consistently amplified when participants (i) moralized and (ii) were attitudinally extreme on the issue. The amplification of myside sharing by moralization also frequently occurred above and beyond that of attitude extremity. These effects generalized to both true and fake partisan news. We then examined a number of interventions meant to curb myside sharing by manipulating (i) the audience to which people imagined sharing partisan news (political friends vs. foes), (ii) the anonymity of the account used (anonymous vs. personal), (iii) a message warning against the myside bias, and (iv) a message warning against the reputational costs of sharing “mysided” fake news coupled with an interactive rating task. While some of those manipulations slightly decreased sharing in general and/or the size of myside sharing, the amplification of myside sharing by moral attitudes was consistently robust to these interventions. Our findings regarding the robust exaggeration of selective communication by morality and extremism offer important insights into belief polarization and the spread of partisan and false information online.

General discussion

Across 12 experiments (N = 6,989), we explored US participants’ intentions to share true and fake partisan news on 5 controversial issues—gun control, abortion, racial equality, sex equality, and immigration—in social media contexts. Our experiments consistently show that people have a strong sharing preference for politically congruent news—Democrats even more so than Republicans. They also demonstrate that this “myside” sharing is magnified when respondents see the issue as being of “absolute moral importance”, and when they have an extreme attitude on the issue. Moreover, issue moralization was found to amplify myside sharing above and beyond attitude extremity in the majority of the studies. Expanding prior research on selective communication, our work provides a clear demonstration that citizens’ myside communicational preference is powerfully amplified by their moral and political ideology (18, 19, 39–43).

By examining this phenomenon across multiple experiments varying numerous parameters, we demonstrated the robustness of myside sharing and of its amplification by participants’ issue moralization and attitude extremity. First, those effects were consistently observed on both true (Experiments 1, 2, 3, 5a, 6a, 7, and 10) and fake (Experiments 4, 5b, 6b, 8, 9, and 10) news stories and across distinct operationalizations of our outcome variable. Moreover, myside sharing and its amplification by issue moralization and attitude extremity were systematically observed despite multiple manipulations of the sharing context. Namely, those effects were observed whether sharing was done from one's personal or an anonymous social media account (Experiments 5a and 5b), whether the audience was made of political friends or foes (Experiments 6a and 6b), and whether participants first saw intervention messages warning against the myside bias (Experiments 7 and 8), or an interactive intervention warning against the reputational costs of sharing mysided falsehoods (Experiments 9 and 10).

Saturday, August 13, 2022

The moral psychology of misinformation: Why we excuse dishonesty in a post-truth world

Effron, D.A., & Helgason, B. A.
Current Opinion in Psychology
Volume 47, October 2022, 101375

Abstract

Commentators say we have entered a “post-truth” era. As political lies and “fake news” flourish, citizens appear not only to believe misinformation, but also to condone misinformation they do not believe. The present article reviews recent research on three psychological factors that encourage people to condone misinformation: partisanship, imagination, and repetition. Each factor relates to a hallmark of “post-truth” society: political polarization, leaders who push “alternative facts,” and technology that amplifies disinformation. By lowering moral standards, convincing people that a lie's “gist” is true, or dulling affective reactions, these factors not only reduce moral condemnation of misinformation, but can also amplify partisan disagreement. We discuss implications for reducing the spread of misinformation.

Repeated exposure to misinformation reduces moral condemnation

A third hallmark of a post-truth society is the existence of technologies, such as social media platforms, that amplify misinformation. Such technologies allow fake news – “articles that are intentionally and verifiably false and that could mislead readers” – to spread fast and far, sometimes in multiple periods of intense “contagion” across time. When fake news does “go viral,” the same person is likely to encounter the same piece of misinformation multiple times. Research suggests that these multiple encounters may make the misinformation seem less unethical to spread.

Conclusion

In a post-truth world, purveyors of misinformation need not convince the public that their lies are true. Instead, they can reduce the moral condemnation they receive by appealing to our politics (partisanship), convincing us a falsehood could have been true or might become true in the future (imagination), or simply exposing us to the same misinformation multiple times (repetition). Partisanship may lower moral standards, partisanship and imagination can both make the broader meaning of the falsehood seem true, and repetition can blunt people's negative affective reaction to falsehoods (see Figure 1). Moreover, because partisan alignment strengthens the effects of imagination and facilitates repeated contact with falsehoods, each of these processes can exacerbate partisan divisions in the moral condemnation of falsehoods. Understanding these effects and their pathways informs interventions aimed at reducing the spread of misinformation.

Ultimately, the line of research we have reviewed offers a new perspective on our post-truth world. Our society is not just post-truth in that people can lie and be believed. We are post-truth in that it is concerningly easy to get a moral pass for dishonesty – even when people know you are lying.

Wednesday, June 16, 2021

Lazy, Not Biased: Susceptibility to Partisan Fake News Is Better Explained by Lack of Reasoning Than by Motivated Reasoning

Pennycook, G., & Rand, D. G. (2019).
Cognition, 188, 39–50.

Abstract

Why do people believe blatantly inaccurate news headlines (“fake news”)? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3,446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news – even for headlines that align with individuals’ political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant’s ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one’s political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se – a finding that opens potential avenues for fighting fake news.

Highlights

• Participants rated perceived accuracy of fake and real news headlines.

• Analytic thinking was associated with ability to discern between fake and real.

• We found no evidence that analytic thinking exacerbates motivated reasoning.

• Falling for fake news is more a result of a lack of thinking than partisanship.

Friday, February 19, 2021

The Cognitive Science of Fake News

Pennycook, G., & Rand, D. G. 
(2020, November 18). 

Abstract

We synthesize a burgeoning literature investigating why people believe and share “fake news” and other misinformation online. Surprisingly, the evidence contradicts a common narrative whereby partisanship and politically motivated reasoning explain failures to discern truth from falsehood. Instead, poor truth discernment is linked to a lack of careful reasoning and relevant knowledge, and to the use of familiarity and other heuristics. Furthermore, there is a substantial disconnect between what people believe and what they will share on social media. This dissociation is largely driven by inattention, rather than purposeful sharing of misinformation. As a result, effective interventions can nudge social media users to think about accuracy, and can leverage crowdsourced veracity ratings to improve social media ranking algorithms.

From the Discussion

Indeed, recent research shows that a simple accuracy nudge intervention – specifically, having participants rate the accuracy of a single politically neutral headline (ostensibly as part of a pretest) prior to making judgments about social media sharing – improves the extent to which people discern between true and false news content when deciding what to share online in survey experiments. This approach has also been successfully deployed in a large-scale field experiment on Twitter, in which messages asking users to rate the accuracy of a random headline were sent to thousands of accounts who recently shared links to misinformation sites. This subtle nudge significantly increased the quality of the content they subsequently shared; see Figure 3B. Furthermore, survey experiments have shown that asking participants to explain how they know if a headline is true or false before sharing it increases sharing discernment, and having participants rate accuracy at the time of encoding protects against familiarity effects.

Saturday, October 24, 2020

Trump's Strangest Lie: A Plague of Suicides Under His Watch

Gilad Edelman
wired.com
Originally published 23 Oct 2020

In last night’s presidential debate, Donald Trump repeated one of his more unorthodox reelection pitches. “People are losing their jobs,” he said. “They’re committing suicide. There’s depression, alcohol, drugs at a level that nobody’s ever seen before.”

It’s strange to hear an incumbent president declare, as an argument in his own favor, that a wave of suicides is occurring under his watch. It’s even stranger given that it’s not true. While Trump has been warning since March that any pandemic lockdowns would lead to “suicides by the thousands,” several studies from abroad have found that when governments imposed such restrictions in the early waves of the pandemic, there was no corresponding increase in these deaths. In fact, suicide rates may even have declined. A preprint study released earlier this week found that the suicide rate in Massachusetts didn’t budge even as that state imposed a strong stay-at-home order in March, April, and May.

(cut)

Add this to the list of tragic ironies of the Trump era: The president is using the nonexistent link between lockdowns and suicide to justify an agenda that really could cause more people to take their own lives.

Friday, July 10, 2020

Aging in an Era of Fake News

Brashier, N. M., & Schacter, D. L. (2020).
Current Directions in Psychological Science, 29(3), 316–323.

Abstract

Misinformation causes serious harm, from sowing doubt in modern medicine to inciting violence. Older adults are especially susceptible—they shared the most fake news during the 2016 U.S. election. The most intuitive explanation for this pattern lays the blame on cognitive deficits. Although older adults forget where they learned information, fluency remains intact, and knowledge accumulated across decades helps them evaluate claims. Thus, cognitive declines cannot fully explain older adults’ engagement with fake news. Late adulthood also involves social changes, including greater trust, difficulty detecting lies, and less emphasis on accuracy when communicating. In addition, older adults are relative newcomers to social media and may struggle to spot sponsored content or manipulated images. In a post-truth world, interventions should account for older adults’ shifting social goals and gaps in their digital literacy.

(cut)

The focus on “facts” at the expense of long-term trust is one reason why I see news organizations being ineffective in preventing, and in some cases facilitating, the establishment of “alternative narratives”. News reporting, as with any other type of declaration, can be ideologically, politically, and emotionally contested. The key differences in the current environment involve speed and transparency: First, people need to be exposed to the facts before the narrative can be strategically distorted through social media, distracting “leaks”, troll operations, and meme warfare. Second, while technological solutions for “fake news” are a valid effort, platforms policing content through opaque technologies adds yet another disruption in the layer of trust that should be reestablished directly between news organizations and their audiences.

A pdf can be found here.

Thursday, May 7, 2020

What Is 'Decision Fatigue' and How Does It Affect You?

Rachel Fairbank
LifeHacker
Originally published 14 April 20

Here is an excerpt:

Too many decisions result in emotional and mental strain

“These are legitimately difficult decisions,” Fischhoff says, adding that people shouldn’t feel bad about struggling with them. “Feeling bad is adding insult to injury,” he says.

This added complexity to our decisions is leading to decision fatigue, which is the emotional and mental strain that comes when we are forced to make too many choices. Decision fatigue is the reason why thinking through a decision is harder when we are stressed or tired.

“These are difficult decisions because the stakes are often really high, while we are required to master unfamiliar information,” Fischhoff says.

But if all of this sounds like too much, there are actions we can take to reduce decision fatigue. For starters, it’s best to minimize the number of small decisions you make in a day, such as what to eat for dinner or what to wear. The fewer small decisions you have to make, the more bandwidth you’ll have for the bigger ones.

For this particular crisis, there are a few more steps you can take to reduce your decision fatigue.

The info is here.

Tuesday, December 31, 2019

Our Brains Are No Match for Our Technology

Tristan Harris
The New York Times
Originally posted 5 Dec 19

Here is an excerpt:

Our Paleolithic brains also aren’t wired for truth-seeking. Information that confirms our beliefs makes us feel good; information that challenges our beliefs doesn’t. Tech giants that give us more of what we click on are intrinsically divisive. Decades after splitting the atom, technology has split society into different ideological universes.

Simply put, technology has outmatched our brains, diminishing our capacity to address the world’s most pressing challenges. The advertising business model built on exploiting this mismatch has created the attention economy. In return, we get the “free” downgrading of humanity.

This leaves us profoundly unsafe. With two billion humans trapped in these environments, the attention economy has turned us into a civilization maladapted for its own survival.

Here’s the good news: We are the only species self-aware enough to identify this mismatch between our brains and the technology we use. Which means we have the power to reverse these trends.

The question is whether we can rise to the challenge, whether we can look deep within ourselves and use that wisdom to create a new, radically more humane technology. “Know thyself,” the ancients exhorted. We must bring our godlike technology back into alignment with an honest understanding of our limits.

This may all sound pretty abstract, but there are concrete actions we can take.

The info is here.

Saturday, December 14, 2019

The Dark Psychology of Social Networks

Jonathan Haidt and Tobias Rose-Stockwell
The Atlantic
Originally posted December 2019

Here are two excerpts:

Human beings evolved to gossip, preen, manipulate, and ostracize. We are easily lured into this new gladiatorial circus, even when we know that it can make us cruel and shallow. As the Yale psychologist Molly Crockett has argued, the normal forces that might stop us from joining an outrage mob—such as time to reflect and cool off, or feelings of empathy for a person being humiliated—are attenuated when we can’t see the person’s face, and when we are asked, many times a day, to take a side by publicly “liking” the condemnation.

In other words, social media turns many of our most politically engaged citizens into Madison’s nightmare: arsonists who compete to create the most inflammatory posts and images, which they can distribute across the country in an instant while their public sociometer displays how far their creations have traveled.

(cut)

Twitter also made a key change in 2009, adding the “Retweet” button. Until then, users had to copy and paste older tweets into their status updates, a small obstacle that required a few seconds of thought and attention. The Retweet button essentially enabled the frictionless spread of content. A single click could pass someone else’s tweet on to all of your followers—and let you share in the credit for contagious content. In 2012, Facebook offered its own version of the retweet, the “Share” button, to its fastest-growing audience: smartphone users.

Chris Wetherell was one of the engineers who created the Retweet button for Twitter. He admitted to BuzzFeed earlier this year that he now regrets it. As Wetherell watched the first Twitter mobs use his new tool, he thought to himself: “We might have just handed a 4-year-old a loaded weapon.”

The coup de grâce came in 2012 and 2013, when Upworthy and other sites began to capitalize on this new feature set, pioneering the art of testing headlines across dozens of variations to find the version that generated the highest click-through rate. This was the beginning of “You won’t believe …” articles and their ilk, paired with images tested and selected to make us click impulsively. These articles were not usually intended to cause outrage (the founders of Upworthy were more interested in uplift). But the strategy’s success ensured the spread of headline testing, and with it emotional story-packaging, through new and old media alike; outrageous, morally freighted headlines proliferated in the following years.

The info is here.

Tuesday, December 10, 2019

AI Deemed 'Too Dangerous To Release' Makes It Out Into The World

Andrew Griffin
independent.co.uk
Originally posted November 8, 2019

Here is an excerpt:

"Due to our concerns about malicious applications of the technology, we are not releasing the trained model," wrote OpenAI in a February blog post, released when it made the announcement. "As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."

At that time, the organisation released only a very limited version of the tool, which used 124 million parameters. It has released more complex versions ever since, and has now made the full version available.

The full version is more convincing than the smaller one, but only "marginally". The relatively limited increase in credibility was part of what encouraged the researchers to make it available, they said.

It hopes that the release can partly help the public understand how such a tool could be misused, and help inform discussions among experts about how that danger can be mitigated.

In February, researchers said that there was a variety of ways that malicious people could misuse the programme. The outputted text could be used to create misleading news articles, impersonate other people, automatically create abusive or fake content for social media or to use to spam people with – along with a variety of possible uses that might not even have been imagined yet, they noted.

Such misuses would require the public to become more critical about the text they read online, which could have been generated by artificial intelligence, they said.

The info is here.

Thursday, December 5, 2019

How Misinformation Spreads--and Why We Trust It

Cailin O'Connor and James Owen Weatherall
Scientific American
Originally posted September 2019

Here is an excerpt:

Many communication theorists and social scientists have tried to understand how false beliefs persist by modeling the spread of ideas as a contagion. Employing mathematical models involves simulating a simplified representation of human social interactions using a computer algorithm and then studying these simulations to learn something about the real world. In a contagion model, ideas are like viruses that go from mind to mind.

You start with a network, which consists of nodes, representing individuals, and edges, which represent social connections.  You seed an idea in one “mind” and see how it spreads under various assumptions about when transmission will occur.
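To make the setup concrete, here is a minimal sketch of such a contagion simulation in Python. The code, the transmission probability, and the example ring network are my own illustrative choices, not the models used by the researchers.

```python
# Minimal contagion sketch: an idea seeded in one "mind" spreads to
# neighbours with a fixed transmission probability each round.
import random

def simulate_contagion(edges, seed, p_transmit=0.3, max_rounds=20):
    # Build an adjacency list from the list of (node, node) edges.
    neighbours = {}
    for a, b in edges:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)

    believers = {seed}
    for _ in range(max_rounds):
        newly_convinced = set()
        for node in believers:
            for other in neighbours.get(node, ()):
                if other not in believers and random.random() < p_transmit:
                    newly_convinced.add(other)
        if not newly_convinced:  # the idea has stopped spreading
            break
        believers |= newly_convinced
    return believers

# Example: a 12-node ring network seeded at node 0.
ring_edges = [(i, (i + 1) % 12) for i in range(12)]
print(sorted(simulate_contagion(ring_edges, seed=0)))
```

Varying the transmission probability or the network structure is exactly the kind of "assumption about when transmission will occur" that such models explore.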

Contagion models are extremely simple but have been used to explain surprising patterns of behavior, such as the epidemic of suicide that reportedly swept through Europe after publication of Goethe's The Sorrows of Young Werther in 1774 or when dozens of U.S. textile workers in 1962 reported suffering from nausea and numbness after being bitten by an imaginary insect. They can also explain how some false beliefs propagate on the Internet.

Before the last U.S. presidential election, an image of a young Donald Trump appeared on Facebook. It included a quote, attributed to a 1998 interview in People magazine, saying that if Trump ever ran for president, it would be as a Republican because the party is made up of “the dumbest group of voters.” Although it is unclear who “patient zero” was, we know that this meme passed rapidly from profile to profile.

The meme's veracity was quickly evaluated and debunked. The fact-checking Web site Snopes reported that the quote was fabricated as early as October 2015. But as with the tomato hornworm, these efforts to disseminate truth did not change how the rumors spread. One copy of the meme alone was shared more than half a million times. As new individuals shared it over the next several years, their false beliefs infected friends who observed the meme, and they, in turn, passed the false belief on to new areas of the network.

This is why many widely shared memes seem to be immune to fact-checking and debunking. Each person who shared the Trump meme simply trusted the friend who had shared it rather than checking for themselves.

Putting the facts out there does not help if no one bothers to look them up. It might seem like the problem here is laziness or gullibility—and thus that the solution is merely more education or better critical thinking skills. But that is not entirely right.

Sometimes false beliefs persist and spread even in communities where everyone works very hard to learn the truth by gathering and sharing evidence. In these cases, the problem is not unthinking trust. It goes far deeper than that.

The info is here.

Wednesday, July 31, 2019

The “Fake News” Effect: An Experiment on Motivated Reasoning and Trust in News

Michael Thaler
Harvard University
Originally published May 28, 2019

Abstract

When people receive information about controversial issues such as immigration policies, upward mobility, and racial discrimination, the information often evokes both what they currently believe and what they are motivated to believe. This paper theoretically and experimentally explores the importance in inference of this latter channel: motivated reasoning. In the theory of motivated reasoning this paper develops, people misupdate from information by treating their motivated beliefs as an extra signal. To test the theory, I create a new experimental design in which people make inferences about the veracity of news sources. This design is unique in that it identifies motivated reasoning from Bayesian updating and confirmation bias, and doesn’t require elicitation of people’s entire belief distribution. It is also very portable: In a large online experiment, I find the first identifying evidence for politically-driven motivated reasoning on eight different economic and social issues. Motivated reasoning leads people to become more polarized, less accurate, and more overconfident in their beliefs about these issues.
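One schematic way to write down the idea that motivated beliefs act as "an extra signal" is sketched below; the notation and the weighting parameter are my own illustration, not the paper's formal model.

```latex
% Standard Bayesian updating about a state \theta after observing signal s:
P(\theta \mid s) \;\propto\; P(s \mid \theta)\, P(\theta)

% Illustrative motivated-reasoning variant: the motivated belief M(\theta)
% is treated as if it were an additional signal, with weight \alpha \ge 0:
\tilde{P}(\theta \mid s) \;\propto\; P(s \mid \theta)\, M(\theta)^{\alpha}\, P(\theta)

% \alpha = 0 recovers the Bayesian benchmark; larger \alpha pulls inferences
% toward what the person wants to believe, producing the polarization and
% overconfidence described in the abstract.
```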

From the Conclusion:

One interpretation of this paper is unambiguously bleak: People of all demographics similarly motivatedly reason, do so on essentially every topic they are asked about, and make particularly biased inferences on issues they find important. However, there is an alternative interpretation: This experiment takes a step towards better understanding motivated reasoning, and makes it easier for future work to attenuate the bias. Using this experimental design, we can identify and estimate the magnitude of the bias; future projects that use interventions to attempt to mitigate motivated reasoning can use this estimated magnitude as an outcome variable. Since the bias does decrease utility in at least some settings, people may have demand for such interventions.

The research is here.

Thursday, June 27, 2019

This doctor is recruiting an army of medical experts to drown out fake health news on Instagram and Twitter

Christine Farr
CNBC.com
Originally published June 2, 2019

The antidote to fake health news? According to Austin Chiang, the first chief medical social media officer at a top hospital, it’s to drown out untrustworthy content with tweets, pics and posts from medical experts that the average American can relate to.

Chiang is a Harvard-trained gastroenterologist with a side passion for social media. On Instagram, where he refers to himself as a “GI Doctor,” he has 20,000 followers, making him one of the most influential docs aside from TV personalities, plastic surgeons and New York’s so-called “most eligible bachelor,” Dr. Mike.

Every few days, he’ll share a selfie or a photo of himself in scrubs along with captions about the latest research or insights from conferences he attends, or advice to patients trying to sort out real information from rumors. He’s also active on Twitter, Microsoft’s LinkedIn and Facebook (which owns Instagram).

But Chiang recognizes that his following pales in comparison to accounts like “Medical Medium,” where two million people tune in to the musings of a psychic, who raves about vegetables that will cure diseases ranging from depression to diabetes. (Gwyneth Paltrow’s Goop has written about the account’s creator glowingly.) Or on Pinterest and Facebook, where anti-vaccination content has been far more prominent than legitimate public health information. Meanwhile, on e-commerce sites like Amazon and eBay, vendors have hawked unproven and dangerous health “cures,” including an industrial-strength bleach that is billed as eliminating autism in children.

The info is here.

Tuesday, June 25, 2019

Truth by Repetition: Explanations and Implications

Unkelbach, C., Koch, A., Silva, R. R., & Garcia-Marques, T. (2019).
Current Directions in Psychological Science, 28(3), 247–253. https://doi.org/10.1177/0963721419827854

Abstract

People believe repeated information more than novel information; they show a repetition-induced truth effect. In a world of “alternative facts,” “fake news,” and strategic information management, understanding this effect is highly important. We first review explanations of the effect based on frequency, recognition, familiarity, and coherent references. On the basis of the latter explanation, we discuss the relations of these explanations. We then discuss implications of truth by repetition for the maintenance of false beliefs and ways to change potentially harmful false beliefs (e.g., “Vaccination causes autism”), illustrating that the truth-by-repetition phenomenon not only is of theoretical interest but also has immediate practical relevance.

Here is a portion of the closing section:

No matter which mental processes may underlie the repetition-induced truth effect, on a functional level, repetition increases subjective truth. The effect’s robustness may be worrisome if one considers that information nowadays is not randomly but strategically repeated. For example, the phenomenon of the “filter bubble” (Pariser, 2011) suggests that people get verbatim and paraphrased repetition only of what they already know and believe. As discussed, logically, this should not strengthen information’s subjective truth. However, as discussed above, repetition does influence subjective truth psychologically. In combination with phenomena such as selective exposure (e.g., Frey, 1986), confirmation biases (e.g., Nickerson, 1998), or failures to consider the opposite (e.g., Schul, Mayo, & Burnstein, 2004), it becomes apparent how even blatantly false information may come “to fix itself in the mind in such a way that it is accepted in the end as a demonstrated truth” (Le Bon, 1895/1996). For example, within the frame of a referential theory, filter bubbles repeat information and thereby add supporting coherent references for existing belief networks, which makes them difficult to change once they are established. Simultaneously, people should also process such information more fluently. In the studies reviewed here, statement content was mostly trivia. Yet, even for this trivia, participants evaluated contradictory information as being less true compared with novel information, even when they were explicitly told that it was 100% false (Unkelbach & Greifeneder, 2018). If one considers how many corresponding references the information that “vaccination leads to autism” may instigate for parents who must decide whether to vaccinate or not, the relevance of the truth-by-repetition phenomenon becomes apparent.

Wednesday, March 20, 2019

Should This Exist? The Ethics Of New Technology

Lulu Garcia-Navarro
www.NPR.org
Originally posted March 3, 2019

Not every new technology product hits the shelves.

Tech companies kill products and ideas all the time — sometimes it's because they don't work, sometimes there's no market.

Or maybe, it might be too dangerous.

Recently, the research firm OpenAI announced that it would not be releasing a version of a text generator they developed, because of fears that it could be misused to create fake news. The text generator was designed to improve dialogue and speech recognition in artificial intelligence technologies.

The organization's GPT-2 text generator can generate paragraphs of coherent, continuing text based off of a prompt from a human. For example, when inputted with the claim, "John F. Kennedy was just elected President of the United States after rising from the grave decades after his assassination," the generator spit out the transcript of "his acceptance speech" that read in part:
It is time once again. I believe this nation can do great things if the people make their voices heard. The men and women of America must once more summon our best elements, all our ingenuity, and find a way to turn such overwhelming tragedy into the opportunity for a greater good and the fulfillment of all our dreams.
Considering the serious issues around fake news and online propaganda that came to light during the 2016 elections, it's easy to see how this tool could be used for harm.
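For readers curious about the mechanics, here is a minimal sketch of prompt-based generation with the smaller GPT-2 model that OpenAI did release publicly. It assumes the Hugging Face transformers library as tooling; it is not the code OpenAI used, and the sampled text will differ on every run.

```python
# Illustrative prompt-based text generation with the publicly released
# 124M-parameter GPT-2 model, via the Hugging Face "transformers" library.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = ("John F. Kennedy was just elected President of the United States "
          "after rising from the grave decades after his assassination.")

# Sample an 80-token continuation of the prompt.
result = generator(prompt, max_new_tokens=80, do_sample=True, top_k=50)
print(result[0]["generated_text"])
```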

The info is here.