Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Social Media.

Thursday, July 15, 2021

Overconfidence in news judgments is associated with false news susceptibility

B. A. Lyons, et al.
PNAS, Jun 2021, 118 (23) e2019527118
DOI: 10.1073/pnas.2019527118

Abstract

We examine the role of overconfidence in news judgment using two large nationally representative survey samples. First, we show that three in four Americans overestimate their relative ability to distinguish between legitimate and false news headlines; respondents place themselves 22 percentiles higher than warranted on average. This overconfidence is, in turn, correlated with consequential differences in real-world beliefs and behavior. We show that overconfident individuals are more likely to visit untrustworthy websites in behavioral data; to fail to successfully distinguish between true and false claims about current events in survey questions; and to report greater willingness to like or share false content on social media, especially when it is politically congenial. In all, these results paint a worrying picture: The individuals who are least equipped to identify false news content are also the least aware of their own limitations and, therefore, more susceptible to believing it and spreading it further.

Significance

Although Americans believe the confusion caused by false news is extensive, relatively few indicate having seen or shared it—a discrepancy suggesting that members of the public may not only have a hard time identifying false news but fail to recognize their own deficiencies at doing so. If people incorrectly see themselves as highly skilled at identifying false news, they may unwittingly participate in its circulation. In this large-scale study, we show that not only is overconfidence extensive, but it is also linked to both self-reported and behavioral measures of false news website visits, engagement, and belief. Our results suggest that overconfidence may be a crucial factor for explaining how false and low-quality information spreads via social media.

Wednesday, June 16, 2021

Lazy, Not Biased: Susceptibility to Partisan Fake News Is Better Explained by Lack of Reasoning Than by Motivated Reasoning

Pennycook, G. & Rand, D. G.
Cognition, Volume 188, July 2019, Pages 39-50

Abstract

Why do people believe blatantly inaccurate news headlines (“fake news”)? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3,446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news – even for headlines that align with individuals’ political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant’s ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one’s political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se – a finding that opens potential avenues for fighting fake news.

Highlights

• Participants rated perceived accuracy of fake and real news headlines.

• Analytic thinking was associated with ability to discern between fake and real.

• We found no evidence that analytic thinking exacerbates motivated reasoning.

• Falling for fake news is more a result of a lack of thinking than partisanship.

Sunday, March 28, 2021

Negativity Spreads More than Positivity on Twitter after both Positive and Negative Political Situations

Schöne, J., Parkinson, B., & Goldenberg, A. 
(2021, January 2). 
https://doi.org/10.31234/osf.io/x9e7u

Abstract

What type of emotional language spreads further in political discourses on social media? Previous research has focused on situations that primarily elicited negative emotions, showing that negative language tended to spread further. The current project addressed the gap introduced when looking only at negative situations by comparing the spread of emotional language in response to both predominantly positive and negative political situations. In Study 1, we examined the spread of emotional language among tweets related to the winning and losing parties in the 2016 US elections, finding that increased negativity (but not positivity) predicted content sharing in both situations. In Study 2, we compared the spread of emotional language in two separate situations: the celebration of the US Supreme Court approval of same-sex marriage (positive), and the Ferguson Unrest (negative), finding again that negativity spread further. These results shed light on the nature of political discourse and engagement.

General Discussion

The goal of the project was to investigate what types of emotional language spread further in response to negative and positive political situations. In Studies 1 (same situation) and 2 (separate situations), we examined the spread of emotional language in response to negative and positive situations. Results from both of our studies suggested that negative language tended to spread further in both negative and positive situations. Analysis of political affiliation in both studies indicated that the users who produced the negative language in the political celebrations were ingroup members (conservatives in Study 1 and liberals in Study 2). Analysis of negative content produced in celebrations shows that negative language was mainly used to describe hardships or past obstacles. Combined, these two studies shed light on the nature of political engagement online.

Tuesday, March 9, 2021

How social learning amplifies moral outrage expression in online social networks

Brady, W. J., McLoughlin, K. L., et al.
(2021, January 19).
https://doi.org/10.31234/osf.io/gf7t5

Abstract

Moral outrage shapes fundamental aspects of human social life and is now widespread in online social networks. Here, we show how social learning processes amplify online moral outrage expressions over time. In two pre-registered observational studies of Twitter (7,331 users and 12.7 million total tweets) and two pre-registered behavioral experiments (N = 240), we find that positive social feedback for outrage expressions increases the likelihood of future outrage expressions, consistent with principles of reinforcement learning. We also find that outrage expressions are sensitive to expressive norms in users’ social networks, over and above users’ own preferences, suggesting that norm learning processes guide online outrage expressions. Moreover, expressive norms moderate social reinforcement of outrage: in ideologically extreme networks, where outrage expression is more common, users are less sensitive to social feedback when deciding whether to express outrage. Our findings highlight how platform design interacts with human learning mechanisms to impact moral discourse in digital public spaces.

From the Conclusion

At first blush, documenting the role of reinforcement learning in online outrage expressions may seem trivial. Of course, we should expect that a fundamental principle of human behavior, extensively observed in offline settings, will similarly describe behavior in online settings. However, reinforcement learning of moral behaviors online, combined with the design of social media platforms, may have especially important social implications. Social media newsfeed algorithms can directly impact how much social feedback a given post receives by determining how many other users are exposed to that post. Because we show here that social feedback impacts users’ outrage expressions over time, this suggests newsfeed algorithms can influence users’ moral behaviors by exploiting their natural tendencies for reinforcement learning. In this way, reinforcement learning on social media differs from reinforcement learning in other environments because crucial inputs to the learning process are shaped by corporate interests. Even if platform designers do not intend to amplify moral outrage, design choices aimed at satisfying other goals, such as profit maximization via user engagement, can indirectly impact moral behavior because outrage-provoking content draws high engagement. Given that moral outrage plays a critical role in collective action and social change, our data suggest that platform designers have the ability to influence the success or failure of social and political movements, as well as informational campaigns designed to influence users’ moral and political attitudes. Future research is required to understand whether users are aware of this, and whether making such knowledge salient can impact their online behavior.


People are more likely to express online "moral outrage" if they have been rewarded for it in the past or if it is common in their own social network. They are even willing to express far more moral outrage than they genuinely feel in order to fit in.
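To make that reinforcement dynamic concrete, here is a toy sketch in Python. It is not the authors' model; the update rule is a generic prediction-error step, and every parameter name and value is an illustrative assumption.

import random

def simulate_user(n_posts=200, learning_rate=0.1, base_tendency=0.2, feedback_prob=0.6):
    """Track one hypothetical user's tendency to post outrage, nudged by feedback."""
    tendency = base_tendency
    for _ in range(n_posts):
        if random.random() < tendency:  # the user expresses outrage on this post
            rewarded = random.random() < feedback_prob  # likes/shares arrive (or not)
            # Simple prediction-error update: move toward 1 if rewarded, toward 0 if not.
            target = 1.0 if rewarded else 0.0
            tendency += learning_rate * (target - tendency)
    return tendency

if __name__ == "__main__":
    print(f"tendency after 200 posts: {simulate_user():.2f}")

In this toy version, when feedback arrives more often than not the expression tendency drifts upward over repeated posts, and when feedback is rare it drifts back down, which is the basic intuition behind the reinforcement-learning account described above.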

Friday, February 19, 2021

The Cognitive Science of Fake News

Pennycook, G., & Rand, D. G. 
(2020, November 18). 

Abstract

We synthesize a burgeoning literature investigating why people believe and share “fake news” and other misinformation online. Surprisingly, the evidence contradicts a common narrative whereby partisanship and politically motivated reasoning explain failures to discern truth from falsehood. Instead, poor truth discernment is linked to a lack of careful reasoning and relevant knowledge, and to the use of familiarity and other heuristics. Furthermore, there is a substantial disconnect between what people believe and what they will share on social media. This dissociation is largely driven by inattention, rather than purposeful sharing of misinformation. As a result, effective interventions can nudge social media users to think about accuracy, and can leverage crowdsourced veracity ratings to improve social media ranking algorithms.

From the Discussion

Indeed, recent research shows that a simple accuracy nudge intervention – specifically, having participants rate the accuracy of a single politically neutral headline (ostensibly as part of a pretest) prior to making judgments about social media sharing – improves the extent to which people discern between true and false news content when deciding what to share online in survey experiments. This approach has also been successfully deployed in a large-scale field experiment on Twitter, in which messages asking users to rate the accuracy of a random headline were sent to thousands of accounts who recently shared links to misinformation sites. This subtle nudge significantly increased the quality of the content they subsequently shared; see Figure 3B. Furthermore, survey experiments have shown that asking participants to explain how they know if a headline is true or false before sharing it increases sharing discernment, and having participants rate accuracy at the time of encoding protects against familiarity effects.

Tuesday, July 14, 2020

The MAD Model of Moral Contagion: The role of motivation, attention and design in the spread of moralized content online

Brady WJ, Crockett MJ, Van Bavel JJ.
Perspect Psychol Sci. 2020;1745691620917336.

Abstract

With over 3 billion users, online social networks represent an important venue for moral and political discourse and have been used to organize political revolutions, influence elections, and raise awareness of social issues. These examples rely on a common process in order to be effective: the ability to engage users and spread moralized content through online networks. Here, we review evidence that expressions of moral emotion play an important role in the spread of moralized content (a phenomenon we call ‘moral contagion’). Next, we propose a psychological model to explain moral contagion. The ‘MAD’ model of moral contagion argues that people have group identity-based motivations to share moral-emotional content; that such content is especially likely to capture our attention; and that the design of social media platforms amplifies our natural motivational and cognitive tendencies to spread such content. We review each component of the model (as well as interactions between components) and raise several novel, testable hypotheses that can spark progress on the scientific investigation of civic engagement and activism, political polarization, propaganda and disinformation, and other moralized behaviors in the digital age.

A copy of the research can be found here.

Monday, April 27, 2020

Experiments on Trial

Hannah Fry
The New Yorker
Originally posted 24 Feb 20

Here are two excerpts:

There are also times when manipulation leaves people feeling cheated. For instance, in 2018 the Wall Street Journal reported that Amazon had been inserting sponsored products in its consumers’ baby registries. “The ads look identical to the rest of the listed products in the registry, except for a small gray ‘Sponsored’ tag,” the Journal revealed. “Unsuspecting friends and family clicked on the ads and purchased the items,” assuming they’d been chosen by the expectant parents. Amazon’s explanation when confronted? “We’re constantly experimenting,” a spokesperson said. (The company has since ended the practice.)

But there are times when the experiments go further still, leaving some to question whether they should be allowed at all. There was a notorious experiment run by Facebook in 2012, in which the number of positive and negative posts in six hundred and eighty-nine thousand users’ news feeds was tweaked. The aim was to see how the unwitting participants would react. As it turned out, those who saw less negative content in their feeds went on to post more positive stuff themselves, while those who had positive posts hidden from their feeds used more negative words.

A public backlash followed; people were upset to discover that their emotions had been manipulated. Luca and Bazerman argue that this response was largely misguided. They point out that the effect was small. A person exposed to the negative news feed “ended up writing about four additional negative words out of every 10,000,” they note. Besides, they say, “advertisers and other groups manipulate consumers’ emotions all the time to suit their purposes. If you’ve ever read a Hallmark card, attended a football game or seen a commercial for the ASPCA, you’ve been exposed to the myriad ways in which products and services influence consumers’ emotions.”

(cut)

Medicine has already been through this. In the early twentieth century, without a set of ground rules on how people should be studied, medical experimentation was like the Wild West. Alongside a great deal of good work, a number of deeply unethical studies took place—including the horrifying experiments conducted by the Nazis and the appalling Tuskegee syphilis trial, in which hundreds of African-American men were denied treatment by scientists who wanted to see how the lethal disease developed. As a result, there are now clear rules about seeking informed consent whenever medical experiments use human subjects, and institutional procedures for reviewing the design of such experiments in advance. We’ve learned that researchers aren’t always best placed to assess the potential harm of their work.

The info is here.

Saturday, February 8, 2020

Bursting the Filter Bubble: Democracy, Design, and Ethics

V. E. Bozdag
Book/Thesis
Originally published in 2015

Online web services such as Google and Facebook started using personalization algorithms. Because information is customized per user by the algorithms of these services, two users who use the same search query or have the same friend list may get different results. Online services argue that by using personalization algorithms, they may show the most relevant information for each user, hence increasing user satisfaction. However, critics argue that the opaque filters used by online services will only show agreeable political viewpoints to the users and the users never get challenged by opposing perspectives. Considering users are already biased in seeking like-minded perspectives, viewpoint diversity will diminish and the users may get trapped in a “filter bubble”. This is an undesired behavior for almost all democracy models. In this thesis we first analyzed the filter bubble phenomenon conceptually, by identifying internal processes and factors in online web services that might cause filter bubbles. Later, we analyzed this issue empirically. We first studied existing metrics in viewpoint diversity research of the computer science literature. We also extended these metrics by adding a new one, namely minority access from media and communication studies. After conducting an empirical study for Dutch and Turkish Twitter users, we showed that minorities cannot reach a large percentage of users in Turkish Twittersphere. We also analyzed software tools and design attempts to combat filter bubbles. We showed that almost all of the tools implement norms required by two popular democracy models. We argue that democracy is essentially a contested concept, and other less popular democracy models should be included in the design of such tools as well.

The book/thesis can be downloaded here.

Tuesday, December 31, 2019

Our Brains Are No Match for Our Technology

Tristan Harris
The New York Times
Originally posted 5 Dec 19

Here is an excerpt:

Our Paleolithic brains also aren’t wired for truth-seeking. Information that confirms our beliefs makes us feel good; information that challenges our beliefs doesn’t. Tech giants that give us more of what we click on are intrinsically divisive. Decades after splitting the atom, technology has split society into different ideological universes.

Simply put, technology has outmatched our brains, diminishing our capacity to address the world’s most pressing challenges. The advertising business model built on exploiting this mismatch has created the attention economy. In return, we get the “free” downgrading of humanity.

This leaves us profoundly unsafe. With two billion humans trapped in these environments, the attention economy has turned us into a civilization maladapted for its own survival.

Here’s the good news: We are the only species self-aware enough to identify this mismatch between our brains and the technology we use. Which means we have the power to reverse these trends.

The question is whether we can rise to the challenge, whether we can look deep within ourselves and use that wisdom to create a new, radically more humane technology. “Know thyself,” the ancients exhorted. We must bring our godlike technology back into alignment with an honest understanding of our limits.

This may all sound pretty abstract, but there are concrete actions we can take.

The info is here.

Saturday, December 14, 2019

The Dark Psychology of Social Networks

Jonathan Haidt and Tobias Rose-Stockwell
The Atlantic
Originally posted December 2019

Here are two excerpts:

Human beings evolved to gossip, preen, manipulate, and ostracize. We are easily lured into this new gladiatorial circus, even when we know that it can make us cruel and shallow. As the Yale psychologist Molly Crockett has argued, the normal forces that might stop us from joining an outrage mob—such as time to reflect and cool off, or feelings of empathy for a person being humiliated—are attenuated when we can’t see the person’s face, and when we are asked, many times a day, to take a side by publicly “liking” the condemnation.

In other words, social media turns many of our most politically engaged citizens into Madison’s nightmare: arsonists who compete to create the most inflammatory posts and images, which they can distribute across the country in an instant while their public sociometer displays how far their creations have traveled.

(cut)

Twitter also made a key change in 2009, adding the “Retweet” button. Until then, users had to copy and paste older tweets into their status updates, a small obstacle that required a few seconds of thought and attention. The Retweet button essentially enabled the frictionless spread of content. A single click could pass someone else’s tweet on to all of your followers—and let you share in the credit for contagious content. In 2012, Facebook offered its own version of the retweet, the “Share” button, to its fastest-growing audience: smartphone users.

Chris Wetherell was one of the engineers who created the Retweet button for Twitter. He admitted to BuzzFeed earlier this year that he now regrets it. As Wetherell watched the first Twitter mobs use his new tool, he thought to himself: “We might have just handed a 4-year-old a loaded weapon.”

The coup de grâce came in 2012 and 2013, when Upworthy and other sites began to capitalize on this new feature set, pioneering the art of testing headlines across dozens of variations to find the version that generated the highest click-through rate. This was the beginning of “You won’t believe …” articles and their ilk, paired with images tested and selected to make us click impulsively. These articles were not usually intended to cause outrage (the founders of Upworthy were more interested in uplift). But the strategy’s success ensured the spread of headline testing, and with it emotional story-packaging, through new and old media alike; outrageous, morally freighted headlines proliferated in the following years.

The info is here.

Thursday, December 5, 2019

How Misinformation Spreads--and Why We Trust It

Cailin O'Connor and James Owen Weatherall
Scientific American
Originally posted September 2019

Here is an excerpt:

Many communication theorists and social scientists have tried to understand how false beliefs persist by modeling the spread of ideas as a contagion. Employing mathematical models involves simulating a simplified representation of human social interactions using a computer algorithm and then studying these simulations to learn something about the real world. In a contagion model, ideas are like viruses that go from mind to mind.

You start with a network, which consists of nodes, representing individuals, and edges, which represent social connections.  You seed an idea in one “mind” and see how it spreads under various assumptions about when transmission will occur.
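Here is a minimal sketch, in Python, of the kind of contagion simulation described above. It is not the authors' code; the network type, transmission probability, and number of steps are illustrative assumptions.

import random
import networkx as nx  # assumed available; any adjacency-list representation would also work

def simulate_contagion(graph, seed_node, transmission_prob=0.2, steps=10):
    """Seed an idea at one node and let it spread along edges each step."""
    believers = {seed_node}  # the one "mind" where the idea starts
    for _ in range(steps):
        newly_convinced = set()
        for node in believers:
            for neighbor in graph.neighbors(node):
                # Each contact transmits the idea with a fixed probability.
                if neighbor not in believers and random.random() < transmission_prob:
                    newly_convinced.add(neighbor)
        believers |= newly_convinced
    return believers

if __name__ == "__main__":
    g = nx.watts_strogatz_graph(n=100, k=6, p=0.1)  # a small-world "social network"
    spread = simulate_contagion(g, seed_node=0)
    print(f"{len(spread)} of {g.number_of_nodes()} nodes adopted the idea")

Varying the transmission probability or the network structure changes how far the idea travels, which is exactly the kind of question such models are used to explore.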

Contagion models are extremely simple but have been used to explain surprising patterns of behavior, such as the epidemic of suicide that reportedly swept through Europe after publication of Goethe's The Sorrows of Young Werther in 1774 or when dozens of U.S. textile workers in 1962 reported suffering from nausea and numbness after being bitten by an imaginary insect. They can also explain how some false beliefs propagate on the Internet.

Before the last U.S. presidential election, an image of a young Donald Trump appeared on Facebook. It included a quote, attributed to a 1998 interview in People magazine, saying that if Trump ever ran for president, it would be as a Republican because the party is made up of “the dumbest group of voters.” Although it is unclear who “patient zero” was, we know that this meme passed rapidly from profile to profile.

The meme's veracity was quickly evaluated and debunked. The fact-checking Web site Snopes reported that the quote was fabricated as early as October 2015. But as with the tomato hornworm, these efforts to disseminate truth did not change how the rumors spread. One copy of the meme alone was shared more than half a million times. As new individuals shared it over the next several years, their false beliefs infected friends who observed the meme, and they, in turn, passed the false belief on to new areas of the network.

This is why many widely shared memes seem to be immune to fact-checking and debunking. Each person who shared the Trump meme simply trusted the friend who had shared it rather than checking for themselves.

Putting the facts out there does not help if no one bothers to look them up. It might seem like the problem here is laziness or gullibility—and thus that the solution is merely more education or better critical thinking skills. But that is not entirely right.

Sometimes false beliefs persist and spread even in communities where everyone works very hard to learn the truth by gathering and sharing evidence. In these cases, the problem is not unthinking trust. It goes far deeper than that.

The info is here.

Monday, November 25, 2019

The MAD Model of Moral Contagion: The role of motivation, attention and design in the spread of moralized content online

William Brady, Molly Crockett, and Jay Van Bavel
PsyArXiv
Originally posted March 11, 2019

Abstract

With over 3 billion users, online social networks represent an important venue for moral and political discourse and have been used to organize political revolutions, influence elections, and raise awareness of social issues. These examples rely on a common process in order to be effective: the ability to engage users and spread moralized content through online networks. Here, we review evidence that expressions of moral emotion play an important role in the spread of moralized content (a phenomenon we call ‘moral contagion’). Next, we propose a psychological model to explain moral contagion. The ‘MAD’ model of moral contagion argues that people have group identity-based motivations to share moral-emotional content; that such content is especially likely to capture our attention; and that the design of social media platforms amplifies our natural motivational and cognitive tendencies to spread such content. We review each component of the model (as well as interactions between components) and raise several novel, testable hypotheses that can spark progress on the scientific investigation of civic engagement and activism, political polarization, propaganda and disinformation, and other moralized behaviors in the digital age.

The research is here.

Saturday, November 16, 2019

Moral grandstanding in public discourse: Status-seeking motives as a potential explanatory mechanism in predicting conflict

Grubbs JB, Warmke B, Tosi J, James AS, Campbell WK
(2019) PLoS ONE 14(10): e0223749.
https://doi.org/10.1371/journal.pone.0223749

Abstract

Public discourse is often caustic and conflict-filled. This trend seems to be particularly evident when the content of such discourse is around moral issues (broadly defined) and when the discourse occurs on social media. Several explanatory mechanisms for such conflict have been explored in recent psychological and social-science literatures. The present work sought to examine a potentially novel explanatory mechanism defined in philosophical literature: Moral Grandstanding. According to philosophical accounts, Moral Grandstanding is the use of moral talk to seek social status. For the present work, we conducted six studies, using two undergraduate samples (Study 1, N = 361; Study 2, N = 356); a sample matched to U.S. norms for age, gender, race, income, Census region (Study 3, N = 1,063); a YouGov sample matched to U.S. demographic norms (Study 4, N = 2,000); and a brief, one-month longitudinal study of Mechanical Turk workers in the U.S. (Study 5, Baseline N = 499, follow-up n = 296), and a large, one-week YouGov sample matched to U.S. demographic norms (Baseline N = 2,519, follow-up n = 1,776). Across studies, we found initial support for the validity of Moral Grandstanding as a construct. Specifically, moral grandstanding motivation was associated with status-seeking personality traits, as well as greater political and moral conflict in daily life.

Conclusion

Public discourse regarding morally charged topics is prone to conflict and polarization, particularly on social media platforms that tend to facilitate ideological echo chambers. The present study introduces an interdisciplinary construct called Moral Grandstanding as a possible contributing factor to this phenomenon. MG links various domains of psychology with moral philosophy to describe the use of public moral speech to enhance one’s status or image in the eyes of others. Within the present work, we focused on the motivation to engage in MG. Specifically, MG Motivation is framed as an expression of status-seeking drives in the domain of public discourse. Self-reported motivations underlying grandstanding behaviors seem to be consistent with the construct of status-seeking more broadly, seeming to represent prestige and dominance striving, both of which were found to be associated with greater interpersonal conflict and polarization. These results were consistently replicated in samples of U.S. undergraduates, nationally representative cross-sectional samples of U.S. residents, and longitudinal studies of adults in the U.S. Collectively, these results suggest that MG Motivation is a useful psychological phenomenon that has the potential to aid our understanding of the intraindividual mechanisms driving caustic public discourse.

Thursday, September 26, 2019

Patients don't think payers, providers can protect their data, survey finds

Paige Minemyer
Fierce Healthcare
Originally published on August 26, 2019

Patients are skeptical of healthcare industry players’ ability to protect their data—and believe health insurers to be the worst at doing so, a new survey shows.

Harvard T.H. Chan School of Public Health and Politico surveyed 1,009 adults in mid-July and found that just 17% have a “great deal” of faith that their health plan will protect their data.

By contrast, 24% said they had a “great deal” of trust in their hospital to protect their data, and 34% said the same about their physician’s office. In addition, 22% of respondents said they had “not very much” trust in their insurer to protect their data, and 17% said they had no trust at all.

The firms that fared the worst on the survey, however, were online search engines and social media sites. Only 7% said they have a “great deal” of trust in search engines such as Google to protect their data, and only 3% said the same about social media platforms.

The info is here.

Friday, September 20, 2019

Why Moral Emotions Go Viral Online

Ana P. Gantman, William J. Brady, & Jay Van Bavel
Scientific American
Originally posted August 20, 2019

Social media is changing the character of our political conversations. As many have pointed out, our attention is a scarce resource that politicians and journalists are constantly fighting to attract, and the online world has become a primary trigger of our moral outrage. These two ideas, it turns out, are fundamentally related. According to our forthcoming paper, words that appeal to one’s sense of right and wrong are particularly effective at capturing attention, which may help explain this new political reality.

It occurred to us that the way people scroll through their social media feeds is very similar to a classic method psychologists use to measure people’s ability to pay attention. When we mindlessly browse social media, we are rapidly presenting a stream of verbal stimuli to ourselves. Psychologists have been studying this issue in the lab for decades, displaying to subjects a rapid succession of words, one after another, in the blink of an eye. In the lab, people are asked to find a target word among a collection of other words. Once they find it, there’s a short window of time in which that word captures their attention. If there’s a second target word in that window, most people don’t even see it—almost as if they had blinked with their eyes open.

There is an exception: if the second target word is emotionally significant to the viewer, that person will see it. Some words are so important to us that they are able to capture our attention even when we are already paying attention to something else.

The info is here.

Friday, August 30, 2019

The Technology of Kindness—How social media can rebuild our empathy—and why it must.

Jamil Zaki
Scientific American
Originally posted August 6, 2019

Here is an excerpt:

Technology also builds new communities around kindness. Consider the paradox of rare illnesses such as cystic fibrosis or myasthenia gravis. Each affects fewer than one in 1,000 people but there are many such conditions, meaning there are many people who suffer in ways their friends and neighbors don’t understand. Millions have turned to online forums, such as Facebook groups or the site RareConnect. In 2011 Priya Nambisan, a health policy expert, surveyed about 800 members of online health forums. Users reported that these groups offer helpful tips and information but also described them as heartfelt communities, full of compassion and commiseration.

Other platforms, such as 7 Cups and Koko, allow anyone to count on the kindness of strangers. These sites train users to provide empathetic social support and then unleash their goodwill on one another. Some express their struggles; others step in to provide support. Users find these platforms deeply soothing. In a 2015 survey, 7 Cups users described the kindness they received on the site to be as helpful as professional psychotherapy. Users on these sites also benefit from helping others. In a 2017 study, psychologist Bruce Doré and his colleagues assigned people to use either Koko or another Web site and tested their subsequent well-being. Koko users’ levels of depression dropped after spending time on the site, especially when they used it to support others.

The info is here.

Sunday, August 18, 2019

Social physics

Despite the vagaries of free will and circumstance, human behaviour in bulk is far more predictable than we like to imagine

Ian Stewart
www.aeon.co
Originally posted July 9, 2019

Here is an excerpt:

Polling organisations use a variety of methods to try to minimise these sources of error. Many of these methods are mathematical, but psychological and other factors also come into consideration. Most of us know of stories where polls have confidently indicated the wrong result, and it seems to be happening more often. Special factors are sometimes invoked to ‘explain’ why, such as a sudden late swing in opinion, or people deliberately lying to make the opposition think it’s going to win and become complacent. Nevertheless, when performed competently, polling has a fairly good track-record overall. It provides a useful tool for reducing uncertainty. Exit polls, where people are asked whom they voted for soon after they cast their vote, are often very accurate, giving the correct result long before the official vote count reveals it, and can’t influence the result.

Today, the term ‘social physics’ has acquired a less metaphorical meaning. Rapid progress in information technology has led to the ‘big data’ revolution, in which gigantic quantities of information can be obtained and processed. Patterns of human behaviour can be extracted from records of credit-card purchases, telephone calls and emails. Words suddenly becoming more common on social media, such as ‘demagogue’ during the 2016 US presidential election, can be clues to hot political issues.

The mathematical challenge is to find effective ways to extract meaningful patterns from masses of unstructured information, and many new methods are being developed for this purpose.
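As one toy example of that kind of pattern extraction, consider flagging words whose frequency jumps sharply between two time windows. This is a sketch, not a method from the essay; the corpus, threshold values, and function name are invented for illustration.

from collections import Counter

def trending_words(earlier_texts, later_texts, min_count=3, ratio=3.0):
    """Return words that are frequent in the later window and much more
    frequent there than in the earlier window."""
    earlier = Counter(w for text in earlier_texts for w in text.lower().split())
    later = Counter(w for text in later_texts for w in text.lower().split())
    return sorted(
        word for word, count in later.items()
        if count >= min_count and count / (earlier[word] + 1) >= ratio
    )

if __name__ == "__main__":
    last_month = ["talking about the weather", "the game last night was great"]
    this_month = ["that demagogue again", "the demagogue speech",
                  "demagogue really", "such a demagogue"]
    print(trending_words(last_month, this_month))  # -> ['demagogue']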

The info is here.

Tuesday, July 23, 2019

How celebrity activists are changing morality in America

Caroline Newman
phys.org
Originally posted July 1, 2019


Here is an excerpt:

Q. What are some of the risks of viewing celebrities as moral authorities?

A. Many argue that celebrities have no right to speak out on these issues because they do not have traditional credentials in law, religion or philosophy. I do not believe that. While there is a risk that people will pay less attention to those traditional leaders, religious or otherwise, that might not be such a bad thing, given some of the scandals we have seen recently.

Perhaps the biggest risk is that anyone with a Facebook or Twitter account can now get on their soapbox and start making moral proclamations, often with little to back that up. Morality has become more of a free-for-all and the responsibility for determining morality rests on the shoulders of everyday Americans, who might find it easier to listen to celebrities they already like rather than conducting research themselves. However, this is arguably what we have done all along, with priests, rabbis, political leaders, etc.

Q. What are some of the benefits?

A. One important benefit is that ethics becomes part of everyday life and everyday discussions. It used to be that people who studied philosophy or religion were kind of off to the side. Now that people like Taylor Swift, Oprah or Colin Kaepernick are talking about very important moral issues, those issues and debates have become mainstream and, if not cool, at least more frequently talked about.

The info is here.

Editor's Note: Ugh.

Thursday, July 4, 2019

Exposure to opposing views on social media can increase political polarization

Christopher Bail, Lisa Argyle, and others
PNAS September 11, 2018 115 (37) 9216-9221; first published August 28, 2018 https://doi.org/10.1073/pnas.1804840115

Abstract

There is mounting concern that social media sites contribute to political polarization by creating “echo chambers” that insulate people from opposing views about current events. We surveyed a large sample of Democrats and Republicans who visit Twitter at least three times each week about a range of social policy issues. One week later, we randomly assigned respondents to a treatment condition in which they were offered financial incentives to follow a Twitter bot for 1 month that exposed them to messages from those with opposing political ideologies (e.g., elected officials, opinion leaders, media organizations, and nonprofit groups). Respondents were resurveyed at the end of the month to measure the effect of this treatment, and at regular intervals throughout the study period to monitor treatment compliance. We find that Republicans who followed a liberal Twitter bot became substantially more conservative posttreatment. Democrats exhibited slight increases in liberal attitudes after following a conservative Twitter bot, although these effects are not statistically significant. Notwithstanding important limitations of our study, these findings have significant implications for the interdisciplinary literature on political polarization and the emerging field of computational social science.

The research is here.

Happy Fourth of July!!!

Wednesday, July 3, 2019

Rep. Matt Gaetz to be investigated by House Ethics for tweet apparently threatening Cohen

Emily Kopp
www.rollcall.com
Originally published June 28, 2019


Rep. Matt Gaetz faces an inquiry by the House Ethics Committee for a tweet that appeared to threaten President Donald Trump’s former lawyer Michael Cohen with blackmail.

The House Ethics Committee announced Friday it would establish an investigative subcommittee to review whether the Florida Republican, a staunch ally of the president, sought to intimidate Cohen before he testified before the House Oversight and Reform panel. The Ethics Committee had sought an interview with Gaetz, but he declined, triggering the investigation.

“If members of Congress want to spend their time psychoanalyzing my tweets, it’s certainly their prerogative,” Gaetz said in an emailed statement. “I won’t be joining them in the endeavor.”

Maryland Democrat Anthony G. Brown will serve as the chairman of the investigative subcommittee, while Mississippi Republican Michael Guest will be the ranking member. The panel will have the power to issue subpoenas in its pursuit of information, documents and interviews.

The info is here.