Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Social Media. Show all posts

Tuesday, October 10, 2023

The Moral Case for No Longer Engaging With Elon Musk’s X

David Lee
Bloomberg.com
Originally published 5 October 23

Here is an excerpt:

Social networks are molded by the incentives presented to users. In the same way we can encourage people to buy greener cars with subsidies or promote healthy living by giving out smartwatches, so, too, can levers be pulled to improve the health of online life. Online, people can’t be told what to post, but sites can try to nudge them toward behaving in a certain manner, whether through design choices or reward mechanisms.

Under the previous management, Twitter at least paid lip service to this. In 2020, it introduced a feature that encouraged people to actually read articles before retweeting them, for instance, to promote “informed discussion.” Jack Dorsey, the co-founder and former chief executive officer, claimed to be thinking deeply about improving the quality of conversations on the platform — seeking ways to better measure and improve good discourse online. Another experiment was hiding the “likes” count in an attempt to train away our brain’s yearning for the dopamine hit we get from social engagement.

One thing the prior Twitter management didn’t do is actively make things worse. When Musk introduced creator payments in July, he splashed rocket fuel over the darkest elements of the platform. These kinds of posts always existed, in no small number, but are now the despicable main event. There’s money to be made. X’s new incentive structure has turned the site into a hive of so-called engagement farming — posts designed with the sole intent to elicit literally any kind of response: laughter, sadness, fear. Or the best one: hate. Hate is what truly juices the numbers.

The user who shared the video of Carson’s attack wasn’t the only one to do it. But his track record on these kinds of posts, and the inflammatory language, primed it to be boosted by the algorithm. By Tuesday, the user was still at it, making jokes about Carson’s girlfriend. All content monetized by advertising, which X desperately needs. It’s no mistake, and the user’s no fringe figure. In July, he posted that the site had paid him more than $16,000. Musk interacts with him often.


Here's my take: 

Lee pointed out that social networks can shape user behavior through incentives, and the previous management of Twitter had made some efforts to promote healthier online interactions. However, under Elon Musk's management, the platform has taken a different direction, actively encouraging provocative and hateful content to boost engagement.

Lee criticized the new incentive structure on X, where users are financially rewarded for producing controversial content. He argued that as the competition for attention intensifies, the content will likely become more violent and divisive.

Lee also mentioned an incident involving former executive Yoel Roth, who raised concerns about hate speech on the platform, and Musk's dismissive response to those concerns.  Musk is not a business genius and does not understand how to promote a healthy social media site.

Friday, September 15, 2023

Older Americans are more vulnerable to prior exposure effects in news evaluation.

Lyons, B. A. (2023). 
Harvard Kennedy School Misinformation Review.

Outline

Older news users may be especially vulnerable to prior exposure effects, whereby news comes to be seen as more accurate over multiple viewings. I test this in re-analyses of three two-wave, nationally representative surveys in the United States (N = 8,730) in which respondents rated a series of mainstream, hyperpartisan, and false political headlines (139,082 observations). I find that prior exposure effects increase with age—being strongest for those in the oldest cohort (60+)—especially for false news. I discuss implications for the design of media literacy programs and policies regarding targeted political advertising aimed at this group.
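
To make the key quantity concrete: the paper's claim is about an interaction, i.e., the effect of prior exposure on perceived accuracy growing with respondent age. Here is a minimal, purely illustrative sketch of that kind of model; the data and column names are hypothetical, and this is not the author's code.

```python
# Illustrative sketch (not the paper's analysis): does the effect of prior
# exposure on perceived accuracy grow with age? The key term is the interaction.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per respondent-headline rating.
df = pd.DataFrame({
    "accuracy_rating": [2, 3, 1, 4, 2, 3, 1, 4],   # perceived accuracy (1-4)
    "prior_exposure":  [0, 1, 0, 1, 0, 1, 0, 1],   # headline seen in wave 1?
    "age":             [22, 22, 45, 45, 63, 63, 70, 70],
    "false_headline":  [1, 1, 0, 0, 1, 1, 0, 0],
})

# A positive prior_exposure:age coefficient would indicate stronger
# exposure effects among older respondents.
model = smf.ols(
    "accuracy_rating ~ prior_exposure * age + false_headline", data=df
).fit()
print(model.params["prior_exposure:age"])
```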

Essay Summary
  • I used three two-wave, nationally representative surveys in the United States (N = 8,730) in which respondents rated a series of actual mainstream, hyperpartisan, or false political headlines. Respondents saw a sample of headlines in the first wave and all headlines in the second wave, allowing me to determine if prior exposure increases perceived accuracy differentially across age.  
  • I found that the effect of prior exposure to headlines on perceived accuracy increases with age. The effect increases linearly with age, with the strongest effect for those in the oldest age cohort (60+). These age differences were most pronounced for false news.
  • These findings suggest that repeated exposure can help account for the positive relationship between age and sharing false information online. However, the size of this effect also underscores that other factors (e.g., greater motivation to derogate the out-party) may play a larger role. 
The beginning of the Implications Section

Web-tracking and social media trace data paint a concerning portrait of older news users. Older American adults were much more likely to visit dubious news sites in 2016 and 2020 (Guess, Nyhan, et al., 2020; Moore et al., 2023), and were also more likely to be classified as false news “supersharers” on Twitter, a group who shares the vast majority of dubious news on the platform (Grinberg et al., 2019). Likewise, this age group shares about seven times more links to these domains on Facebook than younger news consumers (Guess et al., 2019; Guess et al., 2021). 

Interestingly, however, older adults appear to be no worse, if not better, at identifying false news stories than younger cohorts when asked in surveys (Brashier & Schacter, 2020). Why might older adults identify false news in surveys but fall for it “in the wild”? There are likely multiple factors at play, ranging from social changes across the lifespan (Brashier & Schacter, 2020) to changing orientations to politics (Lyons et al., 2023) to cognitive declines (e.g., in memory) (Brashier & Schacter, 2020). In this paper, I focus on one potential contributor. Specifically, I tested the notion that differential effects of prior exposure to false news help account for the disjuncture between older Americans’ performance in survey tasks and their behavior in the wild.

A large body of literature has been dedicated to exploring the magnitude and potential boundary conditions of the illusory truth effect (Hassan & Barber, 2021; Henderson et al., 2021; Pillai & Fazio, 2021)—a phenomenon in which false statements or news headlines (De keersmaecker et al., 2020; Pennycook et al., 2018) come to be believed over multiple exposures. Might this effect increase with age? As detailed by Brashier and Schacter (2020), cognitive deficits are often blamed for older news users’ behaviors. This may be because cognitive abilities are strongest in young adulthood and slowly decline beyond that point (Salthouse, 2009), resulting in increasingly effortful cognition (Hess et al., 2016). As this process unfolds, older adults may be more likely to fall back on heuristics when judging the veracity of news items (Brashier & Marsh, 2020). Repetition, the source of the illusory truth effect, is one heuristic that may be relied upon in such a scenario. This is because repeated messages feel easier to process and thus are seen as truer than unfamiliar ones (Unkelbach et al., 2019).

Friday, August 4, 2023

Social Media and Morality

Van Bavel, J. J., Robertson, C. et al. (2023, June 6).

Abstract

Nearly five billion people around the world now use social media, and this number continues to grow. One of the primary goals of social media platforms is to capture and monetize human attention. One means by which individuals and groups can capture attention and drive engagement on these platforms is by sharing morally and emotionally evocative content. We review a growing body of research on the interrelationship of social media and morality, as well as the consequences for individuals and society. Moral content often goes “viral” on social media, and social media makes moral behavior (such as punishment) less costly. Thus, social media often acts as an accelerant for existing moral dynamics – amplifying outrage, status seeking, and intergroup conflict, while also potentially amplifying more constructive facets of morality, such as social support, pro-sociality, and collective action. We discuss trends, heated debates, and future directions in this emerging literature.

From Discussions and Future Directions

Addressing the interplay between social media and morality 

There is a growing recognition among scholars and the public that social media has deleterious consequences for society and there is a growing appetite for greater transparency and some form of regulation of social media platforms (Rathje et al., 2023). To address the adverse consequences of social media, solutions at the system level are necessary (e.g., Chater & Loewenstein, 2022), but individual- or group-level solutions may be useful for creating behavioral change before system-level change is in place and for increasing public support for system-level solutions (Koppel et. al., 2023). In the following section, we discuss a range of solutions that address the adverse consequences of the interplay between social media and morality.

Regulation is one of the most heavily debated ways of mitigating the adverse features of social media. Regulating social media can be done both on platforms as well as at the national or cross-national level, but always involves discussions about who should decide what should be allowed on which platforms (Kaye, 2019). Currently, there is relatively little editorial oversight of content even on mainstream platforms, yet the association with censorship makes regulation inherently controversial. For instance, Americans believe that social media companies censor political viewpoints (Vogels et al., 2020) and believe it is hard to regulate social media because people cannot agree upon what should and should not be removed (Pew Research Center, 2019). Moreover, authoritarian states can suppress dissent through the regulation of speech on social media.

In general, people on the political left are supportive of regulating social media platforms (Kozyreva, 2023; Rasmussen, 2022), reflecting liberals’ general tendency to support, and conservatives' tendency to oppose, regulatory policies (e.g., Grossman, 2015). In the context of content on social media, one explanation is that left-leaning people infer more harm from aggressive behaviors. In other words, they may perceive immoral behaviors on social media as more harmful for the victim, which in turn justifies regulation (Graham 2009; Crawford 2017; Walter 2019; Boch 2020). There are conflicting results, however, on whether people oppose regulating hate speech (Bilewicz et al. 2017; Rasmussen 2023a) because they use hate to derogate minority and oppressed groups (Sidanius, Pratto, and Bobo 1996; Federico and Sidanius, 2002) or because of principled political preferences deriving from conservative values (Grossman 2016; Grossman 2015; Sniderman & Carmines, 1997; Sniderman & Piazza, 1993; Sniderman, Piazza, Tetlock, & Kendrick, 1991). While sensitivity to harm contributes to making people on the political left more supportive of regulating social media, it is contested whether opposition from the political right derives from group-based dominance or principled opposition.

Click the link above to get to the research.

Here is a summary from me:
  • Social media can influence our moral judgments. Studies have shown that people are more likely to make moral judgments that align with the views of their social media friends and the content they consume on social media. For example, one study found that people who were exposed to pro-environmental content on social media were more likely to make moral judgments that favored environmental protection.
  • Social media can lead to moral disengagement. Moral disengagement is a psychological process that allows people to justify harmful or unethical behavior. Studies have shown that social media can contribute to moral disengagement by making it easier for people to distance themselves from the consequences of their actions. For example, one study found that people who were exposed to violent content on social media were more likely to engage in moral disengagement.
  • Social media can promote prosocial behavior. Prosocial behavior is behavior that is helpful or beneficial to others. Studies have shown that social media can promote prosocial behavior by connecting people with others who share their values and by providing opportunities for people to help others. For example, one study found that people who used social media to connect with others were more likely to volunteer their time to help others.
  • Social media can be used to spread misinformation and hate speech. Misinformation is false or misleading information that is spread intentionally or unintentionally. Hate speech is speech that attacks a person or group on the basis of attributes such as race, religion, or sexual orientation. Social media platforms have been used to spread misinformation and hate speech, which can have a negative impact on society.
Overall, the research on social media and morality suggests that social media can have both positive and negative effects on our moral judgments and behavior. It is important to be aware of the potential risks and benefits of social media and to use it in a way that promotes positive moral values.

Tuesday, August 1, 2023

When Did Medicine Become a Battleground for Everything?

Tara Haelle
Medscape.com
Originally posted 18 July 23

Like hundreds of other medical experts, Leana Wen, MD, an emergency physician and former Baltimore health commissioner, was an early and avid supporter of COVID vaccines and their ability to prevent severe disease, hospitalization, and death from SARS-CoV-2 infections.

When 51-year-old Scott Eli Harris, of Aubrey, Texas, heard of Wen's stance in July 2021, the self-described "5th generation US Army veteran and a sniper" sent Wen an electronic invective laden with racist language and very specific threats to shoot her.

Harris pled guilty to transmitting threats via interstate commerce last February and began serving 6 months in federal prison last fall, but his threats wouldn't be the last for Wen. Just 2 days after Harris was sentenced, charges were unsealed against another man in Massachusetts, who threatened that Wen would "end up in pieces" if she continued "pushing" her thoughts publicly.

Wen has plenty of company. In an August 2022 survey of emergency doctors conducted by the American College of Emergency Physicians, 85% of respondents said violence against them is increasing. One in four doctors said they're being assaulted by patients and their family and friends multiple times a week, compared to just 8% of doctors who said as much in 2018. Sixty-four percent of emergency physicians reported receiving verbal assaults and threats of violence; 40% reported being hit or slapped, and 26% were kicked.

This uptick of violence and threats against physicians didn't come out of nowhere; violence against healthcare workers has been gradually increasing over the past decade. Healthcare providers can attest to the hostility that particular topics have sparked for years: vaccines in pediatrics, abortion in ob-gyn, and gender-affirming care in endocrinology.

But the pandemic fueled the fire. While there have always been hot-button issues in medicine, the ire they arouse today is more intense than ever before. The proliferation of misinformation (often via social media) and the politicization of public health and medicine are at the center of the problem.

"The People Attacking Are Themselves Victims'

The misinformation problem first came to a head in one area of public health: vaccines. The pandemic accelerated antagonism in medicine ― thanks, in part, to decades of anti-vaccine activism.

The anti-vaccine movement, which has ebbed and flowed in the US and across the globe since the first vaccine, experienced a new wave in the early 2000s with the combination of concerns about thimerosal in vaccines and a now disproven link between autism and the MMR vaccine. But that movement grew. It picked up steam when activists gained political clout after a 2014 measles outbreak at Disneyland led California schools to tighten up policies regarding vaccinations for kids who enrolled. These stronger public school vaccination laws ran up against religious freedom arguments from anti-vaccine advocates.

Wednesday, July 19, 2023

Accuracy and social motivations shape judgements of (mis)information

Rathje, S., Roozenbeek, J., Van Bavel, J.J. et al.
Nat Hum Behav 7, 892–903 (2023).

Abstract

The extent to which belief in (mis)information reflects lack of knowledge versus a lack of motivation to be accurate is unclear. Here, across four experiments (n = 3,364), we motivated US participants to be accurate by providing financial incentives for correct responses about the veracity of true and false political news headlines. Financial incentives improved accuracy and reduced partisan bias in judgements of headlines by about 30%, primarily by increasing the perceived accuracy of true news from the opposing party (d = 0.47). Incentivizing people to identify news that would be liked by their political allies, however, decreased accuracy. Replicating prior work, conservatives were less accurate at discerning true from false headlines than liberals, yet incentives closed the gap in accuracy between conservatives and liberals by 52%. A non-financial accuracy motivation intervention was also effective, suggesting that motivation-based interventions are scalable. Altogether, these results suggest that a substantial portion of people’s judgements of the accuracy of news reflects motivational factors.

Conclusions

There is a sizeable partisan divide in the kind of news liberals and conservatives believe in, and conservatives tend to believe in and share more false news than liberals. Our research suggests these differences are not immutable. Motivating people to be accurate improves accuracy about the veracity of true (but not false) news headlines, reduces partisan bias and closes a substantial portion of the gap in accuracy between liberals and conservatives. Theoretically, these results identify accuracy and social motivations as key factors in driving news belief and sharing. Practically, these results suggest that shifting motivations may be a useful strategy for creating a shared reality across the political spectrum.

Key findings
  • Accuracy motivations: Participants who were motivated to be accurate were more likely to correctly identify true and false news headlines.
  • Social motivations: Participants who were motivated to identify news that would be liked by their political allies were less likely to correctly identify true and false news headlines.
  • Combination of motivations: Participants who were motivated by both accuracy and social motivations were more likely to correctly identify true news headlines from the opposing political party.
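
As a rough illustration of the measures behind these findings, here is a toy sketch (hypothetical columns and data, not the authors' analysis) of how discernment (accuracy on true minus false headlines) and partisan bias (belief in congenial minus uncongenial headlines) could be computed by incentive condition.

```python
# Toy sketch: discernment and partisan bias by condition (hypothetical data).
import pandas as pd

df = pd.DataFrame({
    "condition":  ["control"] * 4 + ["incentive"] * 4,
    "is_true":    [1, 0, 1, 0, 1, 0, 1, 0],      # headline actually true?
    "congenial":  [1, 1, 0, 0, 1, 1, 0, 0],      # matches participant's party?
    "rated_true": [1, 1, 0, 1, 1, 0, 1, 0],      # participant judged it accurate
})

rows = {}
for cond, g in df.groupby("condition"):
    discernment = (g.loc[g.is_true == 1, "rated_true"].mean()
                   - g.loc[g.is_true == 0, "rated_true"].mean())
    partisan_bias = (g.loc[g.congenial == 1, "rated_true"].mean()
                     - g.loc[g.congenial == 0, "rated_true"].mean())
    rows[cond] = {"discernment": discernment, "partisan_bias": partisan_bias}

print(pd.DataFrame(rows).T)
```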

Thursday, June 15, 2023

Moralization and extremism robustly amplify myside sharing

Marie, A., Altay, S., et al.
PNAS Nexus, Volume 2, Issue 4, April 2023.

Abstract

We explored whether moralization and attitude extremity may amplify a preference to share politically congruent (“myside”) partisan news and what types of targeted interventions may reduce this tendency. Across 12 online experiments (N = 6,989), we examined decisions to share news touching on the divisive issues of gun control, abortion, gender and racial equality, and immigration. Myside sharing was systematically observed and was consistently amplified when participants (i) moralized and (ii) were attitudinally extreme on the issue. The amplification of myside sharing by moralization also frequently occurred above and beyond that of attitude extremity. These effects generalized to both true and fake partisan news. We then examined a number of interventions meant to curb myside sharing by manipulating (i) the audience to which people imagined sharing partisan news (political friends vs. foes), (ii) the anonymity of the account used (anonymous vs. personal), (iii) a message warning against the myside bias, and (iv) a message warning against the reputational costs of sharing “mysided” fake news coupled with an interactive rating task. While some of those manipulations slightly decreased sharing in general and/or the size of myside sharing, the amplification of myside sharing by moral attitudes was consistently robust to these interventions. Our findings regarding the robust exaggeration of selective communication by morality and extremism offer important insights into belief polarization and the spread of partisan and false information online.

General discussion

Across 12 experiments (N = 6,989), we explored US participants’ intentions to share true and fake partisan news on 5 controversial issues—gun control, abortion, racial equality, sex equality, and immigration—in social media contexts. Our experiments consistently show that people have a strong sharing preference for politically congruent news—Democrats even more so than Republicans. They also demonstrate that this “myside” sharing is magnified when respondents see the issue as being of “absolute moral importance”, and when they have an extreme attitude on the issue. Moreover, issue moralization was found to amplify myside sharing above and beyond attitude extremity in the majority of the studies. Expanding prior research on selective communication, our work provides a clear demonstration that citizens’ myside communicational preference is powerfully amplified by their moral and political ideology (18, 19, 39–43).

By examining this phenomenon across multiple experiments varying numerous parameters, we demonstrated the robustness of myside sharing and of its amplification by participants’ issue moralization and attitude extremity. First, those effects were consistently observed on both true (Experiments 1, 2, 3, 5a, 6a, 7, and 10) and fake (Experiments 4, 5b, 6b, 8, 9, and 10) news stories and across distinct operationalizations of our outcome variable. Moreover, myside sharing and its amplification by issue moralization and attitude extremity were systematically observed despite multiple manipulations of the sharing context. Namely, those effects were observed whether sharing was done from one's personal or an anonymous social media account (Experiments 5a and 5b), whether the audience was made of political friends or foes (Experiments 6a and 6b), and whether participants first saw intervention messages warning against the myside bias (Experiments 7 and 8), or an interactive intervention warning against the reputational costs of sharing mysided falsehoods (Experiments 9 and 10).

Saturday, March 11, 2023

Censoring political opposition online: Who does it and why

Ashokkumar, A., Talaifar, S., et al. (2020).
Journal of Experimental Social Psychology, 91

Abstract

As ordinary citizens increasingly moderate online forums, blogs, and their own social media feeds, a new type of censoring has emerged wherein people selectively remove opposing political viewpoints from online contexts. In three studies of behavior on putative online forums, supporters of a political cause (e.g., abortion or gun rights) preferentially censored comments that opposed their cause. The tendency to selectively censor cause-incongruent online content was amplified among people whose cause-related beliefs were deeply rooted in or “fused with” their identities. Moreover, six additional identity-related measures also amplified the selective censoring effect. Finally, selective censoring emerged even when opposing comments were inoffensive and courteous. We suggest that because online censorship enacted by moderators can skew online content consumed by millions of users, it can systematically disrupt democratic dialogue and subvert social harmony.

Highlights

• We use a novel experimental paradigm to study censorship in online environments.

• People selectively censor online content that challenges their political beliefs.

• People block online authors of posts they disagree with.

• When beliefs are rooted in identity, selective censoring is amplified.

• Selective censoring occurred even for comments without offensive language.

Conclusion

Contemporary pundits often blame the apparent increase in polarization on “the internet” or “social media.” Researchers have found some basis for such assertions by demonstrating that internet users are indeed selectively exposed to evidence that would lend support to their views. Our findings move beyond this literature by demonstrating that moderators employ censorship to not only bring online content into harmony with their values, but to actively advance their causes and attack opponents of their causes. From this vantage point, those whose political beliefs are rooted in their identities are not passive participants in online polarization; rather, they are agentic actors who actively curate online environments by censoring content that challenges their ideological positions. By providing a window into the psychological processes underlying these processes, our research may open up a broader vista of related processes for systematic study.

Wednesday, February 15, 2023

Moralized language predicts hate speech on social media

Kirill Solovev and Nicolas Pröllochs
PNAS Nexus, Volume 2, Issue 1, 
January 2023

Abstract

Hate speech on social media threatens the mental health of its victims and poses severe safety risks to modern societies. Yet, the mechanisms underlying its proliferation, though critical, have remained largely unresolved. In this work, we hypothesize that moralized language predicts the proliferation of hate speech on social media. To test this hypothesis, we collected three datasets consisting of N = 691,234 social media posts and ∼35.5 million corresponding replies from Twitter that have been authored by societal leaders across three domains (politics, news media, and activism). Subsequently, we used textual analysis and machine learning to analyze whether moralized language carried in source tweets is linked to differences in the prevalence of hate speech in the corresponding replies. Across all three datasets, we consistently observed that higher frequencies of moral and moral-emotional words predict a higher likelihood of receiving hate speech. On average, each additional moral word was associated with between 10.76% and 16.48% higher odds of receiving hate speech. Likewise, each additional moral-emotional word increased the odds of receiving hate speech by between 9.35 and 20.63%. Furthermore, moralized language was a robust out-of-sample predictor of hate speech. These results shed new light on the antecedents of hate speech and may help to inform measures to curb its spread on social media.

Significance Statement

This study provides large-scale observational evidence that moralized language fosters the proliferation of hate speech on social media. Specifically, we analyzed three datasets from Twitter covering three domains (politics, news media, and activism) and found that the presence of moralized language in source posts was a robust and meaningful predictor of hate speech in the corresponding replies. These findings offer new insights into the mechanisms underlying the proliferation of hate speech on social media and may help to inform educational applications, counterspeech strategies, and automated methods for hate speech detection.

Discussion

This study provides observational evidence that moralized language in social media posts is associated with more hate speech in the corresponding replies. We uncovered this link for posts from a diverse set of societal leaders across three domains (politics, news media, and activism). On average, each additional moral word was associated with between 10.76 and 16.48% higher odds of receiving hate speech. Likewise, each additional moral-emotional word increased the odds of receiving hate speech by between 9.35 and 20.63%. Across the three domains, the effect sizes were most pronounced for activists. A possible reason is that the activists in our data were affiliated with politically left-leaning subjects (climate, animal rights, and LGBTQIA+) that may have been particularly likely to trigger hate speech from right-wing groups. In contrast, our data for politicians and newspeople were fairly balanced and encompassed users from both sides of the political spectrum. Overall, the comparatively large effect sizes underscore the salient role of moralized language on social media. While earlier research has demonstrated that moralized language is associated with greater virality, our work implies that it fosters the proliferation of hate speech.
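
To give a flavor of the kind of analysis reported here, the sketch below (toy lexicon and data, not the authors' pipeline, which used validated moral-foundations dictionaries and millions of tweets) counts moral words in source posts and fits a logistic regression for whether a reply contains hate speech, converting the coefficient into an odds multiplier.

```python
# Toy sketch: moral word counts in source posts vs. odds of a hateful reply.
import math
import pandas as pd
import statsmodels.formula.api as smf

MORAL_WORDS = {"justice", "evil", "betray", "honor", "corrupt", "purity"}  # toy lexicon

def moral_word_count(text):
    return sum(1 for w in text.lower().split() if w.strip(".,!?") in MORAL_WORDS)

posts = pd.DataFrame({
    "source_text": ["they betray our honor", "new poll numbers today",
                    "corrupt and evil policy", "justice for the victims",
                    "lunch at the office", "purity test for candidates"],
    "reply_is_hate": [1, 0, 0, 1, 0, 1],   # label for one sampled reply per post
})
posts["moral_words"] = posts["source_text"].map(moral_word_count)

fit = smf.logit("reply_is_hate ~ moral_words", data=posts).fit(disp=0)
odds_multiplier = math.exp(fit.params["moral_words"])
print(f"each additional moral word multiplies the odds by ~{odds_multiplier:.2f}")
```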

Monday, January 9, 2023

The Psychology of Online Political Hostility: A Comprehensive, Cross-National Test of the Mismatch Hypothesis

Bor, A., & Petersen, M. (2022).
American Political Science Review, 
116(1), 1-18.
doi:10.1017/S0003055421000885

Abstract

Why are online discussions about politics more hostile than offline discussions? A popular answer argues that human psychology is tailored for face-to-face interaction and people’s behavior therefore changes for the worse in impersonal online discussions. We provide a theoretical formalization and empirical test of this explanation: the mismatch hypothesis. We argue that mismatches between human psychology and novel features of online environments could (a) change people’s behavior, (b) create adverse selection effects, and (c) bias people’s perceptions. Across eight studies, leveraging cross-national surveys and behavioral experiments (total N = 8,434), we test the mismatch hypothesis but only find evidence for limited selection effects. Instead, hostile political discussions are the result of status-driven individuals who are drawn to politics and are equally hostile both online and offline. Finally, we offer initial evidence that online discussions feel more hostile, in part, because the behavior of such individuals is more visible online than offline.

From Conclusions and General Discussion

In this manuscript, we documented that online political discussions seem more hostile than offline discussions and investigated the reasons why such a hostility gap exists. In particular, we provided a comprehensive test of the mismatch hypothesis positing that the hostility gap reflects psychological changes induced by mismatches between the features of online environments and human psychology. Overall, however, we found little evidence that mismatch-induced processes underlie the hostility gap. We found that people are not more hostile online than offline; that hostile individuals do not preferentially select into online (vs. offline) political discussions; and that people do not over-perceive hostility in online messages. We did find some evidence for another selection effect: Non-hostile individuals select out from all, hostile as well as non-hostile, online political discussions. Thus, despite the use of study designs with high power, the present data do not support the claim that online environments produce radical psychological changes in people.

Our ambition with the present endeavor was to initiate research on online political hostility, as more and more political interactions occur online. To this end, we took a sweeping approach, built an overarching framework for understanding online political hostility and provided a range of initial tests. Our work highlights important fruitful avenues for future research. First, future studies should assess whether mismatches could propel hostility on specific environments, platforms or situations, even if these mismatches do not generate hostility in all online environments. Second, all our studies were conducted online and, hence, it is key for future research to assess the mismatch hypothesis using behavioral data from offline discussions. Contrasting online versus offline communications directly in a laboratory setting could yield important new insights on the similarities and differences between these environments. Third, there is mounting evidence that, at least in the USA, online discussions are sometimes hijacked by provocateurs such as employees of Russia’s infamous Internet Research Agency. While recent research implies that the amount of content generated by these actors is trivial compared to the volume of social media discussions (Bail et al. 2020), the activities of such actors may nonetheless contribute to instilling hostility online, even among people not predisposed to be hostile offline.

Thursday, December 15, 2022

Dozens of telehealth startups sent sensitive health information to big tech companies

Katie Palmer with
Todd Feathers & Simon Fondrie-Teitler 
STAT NEWS
Originally posted 13 DEC 22

Here is an excerpt:

Health privacy experts and former regulators said sharing such sensitive medical information with the world’s largest advertising platforms threatens patient privacy and trust and could run afoul of unfair business practices laws. They also emphasized that privacy regulations like the Health Insurance Portability and Accountability Act (HIPAA) were not built for telehealth. That leaves “ethical and moral gray areas” that allow for the legal sharing of health-related data, said Andrew Mahler, a former investigator at the U.S. Department of Health and Human Services’ Office for Civil Rights.

“I thought I was at this point hard to shock,” said Ari Friedman, an emergency medicine physician at the University of Pennsylvania who researches digital health privacy. “And I find this particularly shocking.”

In October and November, STAT and The Markup signed up for accounts and completed onboarding forms on 50 telehealth sites using a fictional identity with dummy email and social media accounts. To determine what data was being shared by the telehealth sites as users completed their forms, reporters examined the network traffic between trackers using Chrome DevTools, a tool built into Google’s Chrome browser.

On Workit’s site, for example, STAT and The Markup found that a piece of code Meta calls a pixel sent responses about self-harm, drug and alcohol use, and personal information — including first name, email address, and phone number — to Facebook.

The investigation found trackers collecting information on websites that sell everything from addiction treatments and antidepressants to pills for weight loss and migraines. Despite efforts to trace the data using the tech companies’ own transparency tools, STAT and The Markup couldn’t independently confirm how or whether Meta and the other tech companies used the data they collected.

After STAT and The Markup shared detailed findings with all 50 companies, Workit said it had changed its use of trackers. When reporters tested the website again on Dec. 7, they found no evidence of tech platform trackers during the company’s intake or checkout process.

“Workit Health takes the privacy of our members seriously,” Kali Lux, a spokesperson for the company, wrote in an email. “Out of an abundance of caution, we elected to adjust the usage of a number of pixels for now as we continue to evaluate the issue.”
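
A brief technical aside on the reporters' approach: auditing this kind of tracker traffic does not require special access. One low-tech variant is to export a HAR file from Chrome DevTools' Network tab while filling out a form and then scan it for third-party hosts. A minimal sketch, assuming a saved site.har file and a hypothetical first-party domain (this is not STAT and The Markup's actual tooling):

```python
# Minimal sketch: list third-party hosts that received requests while filling
# out a form, using a HAR file exported from Chrome DevTools' Network tab.
# (Illustrative only; "site.har" and the first-party domain are assumptions.)
import json
from collections import Counter
from urllib.parse import urlparse

FIRST_PARTY = "example-telehealth.com"   # hypothetical first-party domain

with open("site.har", encoding="utf-8") as f:
    har = json.load(f)

hosts = Counter()
for entry in har["log"]["entries"]:
    host = urlparse(entry["request"]["url"]).netloc
    if FIRST_PARTY not in host:
        hosts[host] += 1

for host, n in hosts.most_common():
    print(f"{n:4d} requests -> {host}")
```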

Tuesday, November 1, 2022

LinkedIn ran undisclosed social experiments on 20 million users for years to study job success

Kathleen Wong
USAToday.com
Originally posted 25 SEPT 22

A new study analyzing the data of over 20 million LinkedIn users over a span of five years reveals that our acquaintances may be more helpful in finding a new job than close friends.

Researchers behind the study say the findings will improve job mobility on the platform, but since users were unaware of their data being studied, some may find the lack of transparency concerning.  

Published this month in Science, the study was conducted by researchers from LinkedIn, Harvard Business School and the Massachusetts Institute of Technology between 2015 and 2019. Researchers ran "multiple large-scale randomized experiments" on the platform's "People You May Know" algorithm, which suggests new connections to users. 

In a practice known as A/B testing, the experiments included giving certain users an algorithm that offered different (like close or not-so-close) contact recommendations and then analyzing the new jobs that came out of those two billion new connections.

(cut)

A question of ethics

Privacy advocates told the New York Times Sunday that some of the 20 million LinkedIn users may not be happy that their data was used without consent. That resistance is part of a longstanding pattern of people's data being tracked and used by tech companies without their knowledge.

LinkedIn told the paper it "acted consistently" with its user agreement, privacy policy and member settings.

LinkedIn did not respond to an email sent by USA TODAY on Sunday. 

The paper reports that LinkedIn's privacy policy does state the company reserves the right to use its users' personal data.

That access can be used "to conduct research and development for our Services in order to provide you and others with a better, more intuitive and personalized experience, drive membership growth and engagement on our Services, and help connect professionals to each other and to economic opportunity." 

It can also be deployed to research trends.

The company also said it used "noninvasive" techniques for the study's research. 

Aral told USA TODAY that researchers "received no private or personally identifying data during the study and only made aggregate data available for replication purposes to ensure further privacy safeguards."
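
For readers unfamiliar with A/B testing, the core comparison in an experiment like this is between outcome rates in the different recommendation arms. Here is a toy sketch with invented numbers (the actual study used far richer causal analyses of tie strength and job transitions):

```python
# Toy A/B comparison: job-transition rates under two recommendation variants.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: (users who started a new job, users assigned to the arm)
weak_tie_jobs, weak_tie_n = 1200, 100_000
strong_tie_jobs, strong_tie_n = 1050, 100_000

stat, pval = proportions_ztest(
    count=[weak_tie_jobs, strong_tie_jobs],
    nobs=[weak_tie_n, strong_tie_n],
)
print(f"weak-tie arm rate:   {weak_tie_jobs / weak_tie_n:.4f}")
print(f"strong-tie arm rate: {strong_tie_jobs / strong_tie_n:.4f}")
print(f"z = {stat:.2f}, p = {pval:.4f}")
```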

Thursday, October 6, 2022

Defining Their Own Ethics, Online Creators Are De Facto Therapists for Millions—Explosive Demand & Few Safeguards

Tantum Hunter
The Washington Post
Originally posted 29 AUG 22

Here are two excerpts:

In real life, mental health information and care are sparse. In the United States, 1 in 3 counties do not have a single licensed psychologist, according to the American Psychological Association, and Americans say cost is a top barrier to seeking mental health help. On the internet, however, mental health tips are everywhere: TikTok videos with #mentalhealth in the caption have earned more than 43.9 billion views, according to the analytics company Sprout Social, and mentions of mental health on social media are increasing year by year.

The growing popularity of the subject means that creators of mental health content are filling a health-care gap. But social media apps are not designed to prioritize accurate, helpful information, critics say, just whatever content draws the biggest reaction. Young people could see their deepest struggles become fodder for advertisers and self-promoters. With no road map even for licensed professionals, mental health creators are defining their own ethics.

“I don’t want to give anyone the wrong advice,” Moloney says. “I’ve met some [followers] who’ve just started crying and saying ‘thank you’ and stuff like that. Even though it seems small, to someone else, it can have a really big impact.”

As rates of depression and anxiety spiked during the pandemic and options for accessible care dwindled, creators shared an array of content including first-person accounts of life with mental illness and videos listing symptoms of bipolar disorder. In many cases, their follower counts ballooned.

(cut)

Ideally, social media apps should be one item in a collection of mental health resources, said Jodi Miller, a researcher at Johns Hopkins University School of Education who studies the relationships among young people, technology and stress.

“Young people need evidence-based sources of information outside the internet, from parents and schools,” Miller said.

Often, those resources are unavailable. So it’s up to consumers to decide what mental health advice they put stock in, Fisher-Quann said. For her, condescending health-care providers and the warped incentives of social media platforms haven’t made that easy. But she thinks she can get better — and that her followers can, too.

“It all has to come from a place of self-awareness and desire to get better. Communities can be extremely helpful for that, but they can also be extremely harmful for that,” she said.

Friday, March 4, 2022

Social media really is making us more morally outraged

Charlotte Hu
Popular Science
updated 13 AUG 21

Here is an excerpt:

The most interesting finding for the team was that some of the more politically moderate people tended to be the ones who are influenced by social feedback the most. “What we know about social media now is that a lot of the political content we see is actually produced by a minority of users—the more extreme users,” Brady says. 

One question that’s come out of this study is: what are the conditions under which moderate users either become more socially influenced to conform to a more extreme tone, as opposed to just get turned off by it and leave the platform, or don’t engage any more? “I think both of these potential directions are important because they both imply that the average tone of conversation on the platform will get increasingly extreme.”

Social media can exploit base human psychology

Moral outrage is a natural tendency. “It’s very deeply ingrained in humans, it happens online, offline, everyone, but there is a sense that the design of social media can amplify in certain contexts this natural tendency we have,” Brady says. But moral outrage is not always bad. It can have important functions, and therefore, “it’s not a clear-cut answer that we want to reduce moral outrage.”

“There’s a lot of data now that suggest that negative content does tend to draw in more engagement on the average than positive content,” says Brady. “That being said, there are lots of contexts where positive content does draw engagement. So it’s definitely not a universal law.” 

It’s likely that multiple factors are fueling this trend. People could be attracted to posts that are more popular or go viral on social media, and past studies have shown that we want to know what the gossip is and what people are doing wrong. But the more people engage with these types of posts, the more platforms push them to us. 

Jonathan Nagler, a co-director of NYU Center for Social Media and Politics, who was not involved in the study, says it’s not shocking that moral outrage gets rewarded and amplified on social media. 

Monday, January 31, 2022

The future of work: freedom, justice and capital in the age of artificial intelligence

F. S. de Sio, T. Almeida & J. van den Hoven
(2021) Critical Review of International Social and Political Philosophy
DOI: 10.1080/13698230.2021.2008204

Abstract

Artificial Intelligence (AI) is predicted to have a deep impact on the future of work and employment. The paper outlines a normative framework to understand and protect human freedom and justice in this transition. The proposed framework is based on four main ideas: going beyond the idea of a Basic Income to compensate the losers in the transition towards AI-driven work, towards a Responsible Innovation approach, in which the development of AI technologies is governed by an inclusive and deliberate societal judgment; going beyond a philosophical conceptualisation of social justice only focused on the distribution of ‘primary goods’, towards one focused on the different goals, values, and virtues of various social practices (Walzer’s ‘spheres of justice’) and the different individual capabilities of persons (Sen’s ‘capabilities’); going beyond a classical understanding of capital, towards one explicitly including mental capacities as a source of value for AI-driven activities. In an effort to promote an interdisciplinary approach, the paper combines political and economic theories of freedom, justice and capital with recent approaches in applied ethics of technology, and starts applying its normative framework to some concrete example of AI-based systems: healthcare robotics, ‘citizen science’, social media and platform economy.

From the Conclusion

Whether or not it will create a net job loss (aka technological unemployment), Artificial Intelligence and digital technologies will change the nature of work, and will have a deep impact on people’s work lives. New political action is needed to govern this transition. In this paper we have claimed that new philosophical concepts are also needed, if the transition is to be governed responsibly and in the interest of everybody. The paper has outlined a general normative framework to make sense of, and address, the issue of human freedom and justice in the age of AI at work. The framework is based on four ideas. First, in general, freedom and justice cannot be achieved by only protecting existing jobs as a goal in itself, inviting persons to find ways to remain relevant in a new machine-driven world, or offering financial compensation to those who are (permanently) left unemployed, for instance, via a Universal Basic Income. We should rather prevent technological unemployment and the worsening of working conditions through a Responsible Innovation approach to technology, where freedom and justice are built into the technical and institutional structures of the work of the future. Second, more particularly, we have argued that freedom and justice may be best promoted by a politics and an economics of technology informed by the recognition of different virtues and values as constitutive of different activities, following a Walzerian (‘spheres of justice’) approach to technological and institutional design, possibly integrated with a virtue ethics component.

Monday, November 1, 2021

Social Media and Mental Health

Luca Braghieri, Ro’ee Levy, and Alexey Makarin
Independent Research
August 21

Abstract 

The diffusion of social media coincided with a worsening of mental health conditions among adolescents and young adults in the United States, giving rise to speculation that social media might be detrimental to mental health. In this paper, we provide the first quasi-experimental estimates of the impact of social media on mental health by leveraging a unique natural experiment: the staggered introduction of Facebook across U.S. colleges. Our analysis couples data on student mental health around the years of Facebook’s expansion with a generalized difference-in-differences empirical strategy. We find that the roll-out of Facebook at a college increased symptoms of poor mental health, especially depression, and led to increased utilization of mental healthcare services. We also find that, according to the students’ reports, the decline in mental health translated into worse academic performance. Additional evidence on mechanisms suggests the results are due to Facebook fostering unfavorable social comparisons. 
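
For those curious about the "generalized difference-in-differences empirical strategy" mentioned above, here is a minimal two-way fixed-effects sketch on toy data; the variable names and numbers are made up, and this is not the authors' code (a real analysis would also cluster standard errors by college).

```python
# Illustrative staggered difference-in-differences with two-way fixed effects.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "college":   ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "semester":  [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "facebook":  [0, 1, 1, 0, 0, 1, 0, 0, 0],   # 1 once Facebook has rolled out
    "depression_index": [0.1, 0.4, 0.5, 0.2, 0.2, 0.6, 0.1, 0.2, 0.1],
})

# College fixed effects absorb time-invariant differences between schools;
# semester fixed effects absorb shocks common to all schools. The coefficient
# on "facebook" is the difference-in-differences estimate of interest.
model = smf.ols(
    "depression_index ~ facebook + C(college) + C(semester)", data=df
).fit()
print(model.params["facebook"])
```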

Discussion 

Implications for social media today 

Our estimates of the effects of social media on mental health rely on quasi-experimental variation in Facebook access among college students around the years 2004 to 2006. Such population and time window are directly relevant to the discussion about the severe worsening of mental health conditions among adolescents and young adults over the last two decades. In this section, we elaborate on the extent to which our findings have the potential to inform our understanding of the effects of social media on mental health today. 

Over the last two decades, Facebook underwent a host of important changes. Such changes include: i) the introduction of a personalized feed where posts are ranked by an algorithm; ii) the growth of Facebook’s user base from U.S. college students to almost three billion active users around the globe (Facebook, 2021); iii) video often replacing images and text; iv) increased usage of Facebook on mobile phones instead of computers; and v) the introduction of Facebook pages for brands, businesses, and organizations. 

The nature of the variation we are exploiting in this paper does not allow us to identify the impact of these features of social media. For example, the introduction of pages, along with other changes, made news consumption on Facebook more common over the last decade than it was at inception. Our estimates cannot shed light on whether the increased reliance on Facebook for news consumption has exacerbated or mitigated the effects of Facebook on mental health. 

Despite these caveats, we believe the estimates presented in this paper are still highly relevant today for two main reasons: first, the mechanisms whereby social media use might affect mental health arguably relate to core features of social media platforms that have been present since inception and that remain integral parts of those platforms today; second, the technological changes undergone by Facebook and related platforms might have amplified rather than mitigated the effect of those mechanisms. 

Saturday, October 16, 2021

Social identity shapes antecedents and functional outcomes of moral emotion expression in online networks

Brady, W. J., & Van Bavel, J. J. 
(2021, April 2). 

Abstract

As social interactions increasingly occur through social media platforms, intergroup affective phenomena such as “outrage firestorms” and “cancel culture” have emerged with notable consequences for society. In this research, we examine how social identity shapes the antecedents and functional outcomes of moral emotion expression online. Across four pre-registered experiments (N = 1,712), we find robust evidence that the inclusion of moral-emotional expressions in political messages has a causal influence on intentions to share the messages on social media. We find that individual differences in the strength of partisan identification is a consistent predictor of sharing messages with moral-emotional expressions, but little evidence that brief manipulations of identity salience increased sharing. Negative moral emotion expression in social media messages also causes the message author to be perceived as more strongly identified among their partisan ingroup, but less open-minded and less worthy of conversation to outgroup members. These experiments highlight the role of social identity in affective phenomena in the digital age, and showcase how moral emotion expressions in online networks can serve ingroup reputation functions while at the same time hinder discourse between political groups.

Conclusion

In the context of contentious political conversations online, moral-emotional language causes political partisans to share the message more often, and this effect was strongest among strong group identifiers. Expressing negative moral-emotional language in social media messages makes the message author appear more strongly identified with their group, but also makes outgroup members think the author is less open-minded and less worthy of conversation. This work sheds light on antecedents and functional outcomes of moral-emotion expression in the digital age, which is becoming increasingly important to study as intergroup affective phenomena such as viral outrage and affective polarization are reaching historic levels.

Tuesday, October 12, 2021

Demand five precepts to aid social-media watchdogs

Ethan Zuckerman
Nature 597, 9 (2021)
Originally published 31 Aug 21

Here is an excerpt:

I propose the following. First, give researchers access to the same targeting tools that platforms offer to advertisers and commercial partners. Second, for publicly viewable content, allow researchers to combine and share data sets by supplying keys to application programming interfaces. Third, explicitly allow users to donate data about their online behaviour for research, and make code used for such studies publicly reviewable for security flaws. Fourth, create safe-haven protections that recognize the public interest. Fifth, mandate regular audits of algorithms that moderate content and serve ads.

In the United States, the FTC could demand this access on behalf of consumers: it has broad powers to compel the release of data. In Europe, making such demands should be even more straightforward. The European Data Governance Act, proposed in November 2020, advances the concept of “data altruism” that allows users to donate their data, and the broader Digital Services Act includes a potential framework to implement protections for research in the public interest.

Technology companies argue that they must restrict data access because of the potential for harm, which also conveniently insulates them from criticism and scrutiny. They cite misuse of data, such as in the Cambridge Analytica scandal (which came to light in 2018 and prompted the FTC orders), in which an academic researcher took data from tens of millions of Facebook users collected through online ‘personality tests’ and gave it to a UK political consultancy that worked on behalf of Donald Trump and the Brexit campaign. Another example of abuse of data is the case of Clearview AI, which used scraping to produce a huge photographic database to allow federal and state law-enforcement agencies to identify individuals.

These incidents have led tech companies to design systems to prevent misuse — but such systems also prevent research necessary for oversight and scrutiny. To ensure that platforms act fairly and benefit society, there must be ways to protect user data and allow independent oversight.

Friday, August 27, 2021

It’s hard to be a moral person. Technology is making it harder.

Sigal Samuel
vox.com
Originally posted 3 Aug 21

Here is an excerpt:

People who point out the dangers of digital tech are often met with a couple of common critiques. The first one goes like this: It’s not the tech companies’ fault. It’s users’ responsibility to manage their own intake. We need to stop being so paternalistic!

This would be a fair critique if there were symmetrical power between users and tech companies. But as the documentary The Social Dilemma illustrates, the companies understand us better than we understand them — or ourselves. They’ve got supercomputers testing precisely which colors, sounds, and other design elements are best at exploiting our psychological weaknesses (many of which we’re not even conscious of) in the name of holding our attention. Compared to their artificial intelligence, we’re all children, Harris says in the documentary. And children need protection.

Another critique suggests: Technology may have caused some problems — but it can also fix them. Why don’t we build tech that enhances moral attention?

“Thus far, much of the intervention in the digital sphere to enhance that has not worked out so well,” says Tenzin Priyadarshi, the director of the Dalai Lama Center for Ethics and Transformative Values at MIT.

It’s not for lack of trying. Priyadarshi and designers affiliated with the center have tried creating an app, 20 Day Stranger, that gives continuous updates on what another person is doing and feeling. You get to know where they are, but never find out who they are. The idea is that this anonymous yet intimate connection might make you more curious or empathetic toward the strangers you pass every day.

They also designed an app called Mitra. Inspired by Buddhist notions of a “virtuous friend” (kalyāṇa-mitra), it prompts you to identify your core values and track how much you acted in line with them each day. The goal is to heighten your self-awareness, transforming your mind into “a better friend and ally.”

I tried out this app, choosing family, kindness, and creativity as the three values I wanted to track. For a few days, it worked great. Being primed with a reminder that I value family gave me the extra nudge I needed to call my grandmother more often. But despite my initial excitement, I soon forgot all about the app.

Saturday, July 31, 2021

Stewardship of global collective behavior

Bak-Coleman, J. B., et al.
Proceedings of the National Academy of Sciences 
Jul 2021, 118 (27) e2025764118
DOI: 10.1073/pnas.2025764118

Abstract

Collective behavior provides a framework for understanding how the actions and properties of groups emerge from the way individuals generate and share information. In humans, information flows were initially shaped by natural selection yet are increasingly structured by emerging communication technologies. Our larger, more complex social networks now transfer high-fidelity information over vast distances at low cost. The digital age and the rise of social media have accelerated changes to our social systems, with poorly understood functional consequences. This gap in our knowledge represents a principal challenge to scientific progress, democracy, and actions to address global crises. We argue that the study of collective behavior must rise to a “crisis discipline” just as medicine, conservation, and climate science have, with a focus on providing actionable insight to policymakers and regulators for the stewardship of social systems.

Summary

Human collective dynamics are critical to the wellbeing of people and ecosystems in the present and will set the stage for how we face global challenges with impacts that will last centuries. There is no reason to suppose natural selection will have endowed us with dynamics that are intrinsically conducive to human wellbeing or sustainability. The same is true of communication technology, which has largely been developed to solve the needs of individuals or single organizations. Such technology, combined with human population growth, has created a global social network that is larger, denser, and able to transmit higher-fidelity information at greater speed. With the rise of the digital age, this social network is increasingly coupled to algorithms that create unprecedented feedback effects.

Insight from across academic disciplines demonstrates that past and present changes to our social networks will have functional consequences across scales of organization. Given that the impacts of communication technology will transcend disciplinary lines, the scientific response must do so as well. Unsafe adoption of technology has the potential to both threaten wellbeing in the present and have lasting consequences for sustainability. Mitigating risk to ourselves and posterity requires a consolidated, crisis-focused study of human collective behavior.

Such an approach can benefit from lessons learned in other fields, including climate change and conservation biology, which are likewise required to provide actionable insight without the benefit of a complete understanding of the underlying dynamics. Integrating theoretical, descriptive, and empirical approaches will be necessary to bridge the gap between individual and large-scale behavior. There is reason to be hopeful that well-designed systems can promote healthy collective action at scale, as has been demonstrated in numerous contexts including the development of open-sourced software, curating Wikipedia, and the production of crowd-sourced maps. These examples not only provide proof that online collaboration can be productive, but also highlight means of measuring and defining success. Research in political communications has shown that while online movements and coordination are often prone to failure, when they succeed, the results can be dramatic. Quantifying benefits of online interaction, and limitations to harnessing these benefits, is a necessary step toward revealing the conditions that promote or undermine the value of communication technology.

Thursday, July 15, 2021

Overconfidence in news judgments is associated with false news susceptibility

B. A. Lyons, et al.
PNAS, Jun 2021, 118 (23) e2019527118
DOI: 10.1073/pnas.2019527118

Abstract

We examine the role of overconfidence in news judgment using two large nationally representative survey samples. First, we show that three in four Americans overestimate their relative ability to distinguish between legitimate and false news headlines; respondents place themselves 22 percentiles higher than warranted on average. This overconfidence is, in turn, correlated with consequential differences in real-world beliefs and behavior. We show that overconfident individuals are more likely to visit untrustworthy websites in behavioral data; to fail to successfully distinguish between true and false claims about current events in survey questions; and to report greater willingness to like or share false content on social media, especially when it is politically congenial. In all, these results paint a worrying picture: The individuals who are least equipped to identify false news content are also the least aware of their own limitations and, therefore, more susceptible to believing it and spreading it further.
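
The "22 percentiles higher than warranted" figure is a measure of overplacement: the percentile at which respondents place themselves minus the percentile implied by their actual performance. A toy sketch with hypothetical columns (not the authors' data or code):

```python
# Toy sketch of overplacement: perceived percentile minus actual percentile.
import pandas as pd

df = pd.DataFrame({
    "perceived_percentile": [90, 75, 60, 85, 50],
    "headline_score":       [12, 18, 9, 10, 16],   # correct judgments out of 20
})

# Convert actual scores to percentile ranks within the sample (0-100 scale).
df["actual_percentile"] = df["headline_score"].rank(pct=True) * 100
df["overplacement"] = df["perceived_percentile"] - df["actual_percentile"]

print(df)
print("mean overplacement:", round(df["overplacement"].mean(), 1))
```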

Significance

Although Americans believe the confusion caused by false news is extensive, relatively few indicate having seen or shared it—a discrepancy suggesting that members of the public may not only have a hard time identifying false news but fail to recognize their own deficiencies at doing so. If people incorrectly see themselves as highly skilled at identifying false news, they may unwittingly participate in its circulation. In this large-scale study, we show that not only is overconfidence extensive, but it is also linked to both self-reported and behavioral measures of false news website visits, engagement, and belief. Our results suggest that overconfidence may be a crucial factor for explaining how false and low-quality information spreads via social media.