Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Persuasion.

Sunday, May 7, 2023

Stolen elections: How conspiracy beliefs during the 2020 American presidential elections changed over time

Wang, H., & Van Prooijen, J. (2022).
Applied Cognitive Psychology.


Conspiracy beliefs have been studied mostly through cross-sectional designs. We conducted a five-wave longitudinal study (N = 376; two waves before and three waves after the 2020 American presidential elections) to examine whether the election results influenced specific conspiracy beliefs and conspiracy mentality, and whether effects differ between election winners (i.e., Biden voters) versus losers (i.e., Trump voters) at the individual level. Results revealed that conspiracy mentality remained unchanged over 2 months, providing the first evidence that this indeed is a relatively stable trait. Specific conspiracy beliefs (outgroup and ingroup conspiracy beliefs) did change over time, however. In terms of group-level change, outgroup conspiracy beliefs decreased over time for Biden voters but increased for Trump voters. Ingroup conspiracy beliefs decreased over time across all voters, although those of Trump voters decreased faster. These findings illuminate how specific conspiracy beliefs are, and conspiracy mentality is not, influenced by an election event.

From the General Discussion

Most studies on conspiracy beliefs provide correlational evidence through cross-sectional designs (van Prooijen & Douglas, 2018). The present research took full advantage of the 2020 American presidential elections through a five-wave longitudinal design, enabling three complementary contributions. First, the results provide evidence that conspiracy mentality is a relatively stable individual difference trait (Bruder et al., 2013; Imhoff & Bruder, 2014): While the election did influence specific conspiracy beliefs (i.e., that the elections were rigged), it did not influence conspiracy mentality. Second, the results provide evidence for the notion that conspiracy beliefs are for election losers (Uscinski & Parent, 2014), as reflected in the finding that Biden voters' outgroup conspiracy beliefs decreased at the individual level, while Trump voters' did not. The group-level effects on changes in outgroup conspiracy beliefs also underscored the role of intergroup conflict in conspiracy theories (van Prooijen & Song, 2021). And third, the present research examined conspiracy theories about one's own political ingroup, and found that such ingroup conspiracy beliefs decreased over time.

The decrease over time for ingroup conspiracy beliefs occurred among both Biden and Trump voters. We speculate that, given its polarized nature and contested result, this election increased intergroup conflict between Biden and Trump voters. Such intergroup conflict may have increased feelings of ingroup loyalty within both voter groups (Druckman, 1994), therefore decreasing beliefs that members of one's own group were conspiring. Moreover, ingroup conspiracy beliefs were higher for Trump than Biden voters (particularly at the first measurement point). This difference might expand previous findings that Republicans are more susceptible to conspiracy cues than Democrats (Enders & Smallpage, 2019), by suggesting that these effects generalize to conspiracy cues coming from their own ingroup.


The 2020 American presidential elections yielded many conspiracy beliefs that the elections were rigged, and conspiracy beliefs generally have negative consequences for societies. One key challenge for scientists and policymakers is to establish how conspiracy theories develop over time. In this research, we conducted a longitudinal study to provide empirical insights into the temporal dynamics underlying conspiracy beliefs, in the setting of a polarized election. We conclude that specific conspiracy beliefs that the elections were rigged—but not conspiracy mentality—are malleable over time, depending on political affiliations and election results.

Tuesday, May 2, 2023

Lies and bullshit: The negative effects of misinformation grow stronger over time

Petrocelli, J. V., Seta, C. E., & Seta, J. J. (2023). 
Applied Cognitive Psychology, 37(2), 409–418. 


In a world where exposure to untrustworthy communicators is common, trust has become more important than ever for effective marketing. Nevertheless, we know very little about the long-term consequences of exposure to untrustworthy sources, such as bullshitters. This research examines how untrustworthy sources—liars and bullshitters—influence consumer attitudes toward a product. Frankfurt's (1986) insidious bullshit hypothesis (i.e., bullshitting is evaluated less negatively than lying, but bullshit can be more harmful than lies are) is examined within a traditional sleeper effect—a persuasive influence that increases, rather than decays, over time. We obtained a sleeper effect after participants learned that the source of the message was either a liar or a bullshitter. However, compared to the liar source condition, the same message from a bullshitter resulted in more extreme immediate and delayed attitudes that were in line with an otherwise discounted persuasive message (i.e., an advertisement). Interestingly, attitudes returned to control condition levels when a bullshitter was the source of the message, suggesting that knowing an initially discounted message may be potentially accurate/inaccurate (as is true with bullshit, but not lies) does not result in the long-term discounting of that message. We discuss implications for marketing and other contexts of persuasion.

General Discussion

There is a considerable body of knowledge about the antecedents and consequences of lying in marketing and other contexts (e.g., Ekman, 1985), but much less is known about the other untrustworthy source: The Bullshitter. The current investigation suggests that the distinction between bullshitting and lying is important to marketing and to persuasion more generally. People are exposed to scores of lies and bullshit every day and this exposure has increased dramatically as the use of the internet has shifted from a platform for socializing to a source of information (e.g., Di Domenico et al., 2021). Because things such as truth status and source status fade faster than familiarity, illusory truth effects for consumer products can emerge after only 3 days post-initial exposure (Skurnik et al., 2005), and within the hour for basic knowledge questions (Fazio et al., 2015). As mirrored in our conditions that received discounting cues after the initial attitude information, at times people are lied to, or bullshitted, and only learn afterwards they were deceived. It is then that these untrustworthy sources appear to have a sleeper effect creating unwarranted and undiscounted attitudes.

It should be noted that our data do not suggest that the impact of lie and bullshit discounting cues fade differentially. However, the discounting cue in the bullshit condition had less of an immediate and long-term suppression effect than in the lie condition. In fact, after 14 days, the bullshit communication not only had more of an influence on attitudes, but the influence was not significantly different from that of the control communication. This finding suggests that bullshit can be more insidious than lies. As it relates to marketing, the insidious nature of exposure to bullshit can create false beliefs that subsequently affect behavior, even when people have been told that the information came from a person known to spread bullshit. The insidious nature of bullshit is magnified by the fact that even when it is clear that one is expressing his/her opinion via bullshit, people do not appear to hold the bullshitter to the same standard as the liar (Frankfurt, 1986). People may think that at least the bullshitter often believes his/her own bullshit, whereas the liar knows his/her statement is not true (Bernal, 2006; Preti, 2006; Reisch, 2006). Because of this difference, what may appear to be harmless communications from a bullshitter may have serious repercussions for consumers and organizations. Additionally, along with the research of Foos et al. (2016), the present research suggests that the harmful influence of untrustworthy sources may not be recognized initially but appears over time. The present research suggests that efforts to fight the consequences of fake news (see Atkinson, 2019) are more difficult because of the sleeper effect. The negative effects of unsubstantiated or false information may not only persist but may grow stronger over time.

Sunday, January 15, 2023

How Hedges Impact Persuasion

Oba, D., & Berger, J. A.
(2022, July 23).


Communicators often hedge. Salespeople say that a product is probably the best, recommendation engines suggest movies they think you’ll like, and consumers say restaurants might have good service. But how does hedging impact persuasion? We suggest that different types of hedges may have different effects. Six studies support our theorizing, demonstrating that (1) the probabilistic likelihood hedges suggest and (2) whether they take a personal (vs. general) perspective both play an important role in driving persuasion. Further, the studies demonstrate that both effects are driven by a common mechanism: perceived confidence. Using hedges associated with higher likelihood, or that involve personal perspective, increases persuasion because they suggest communicators are more confident about what they are saying. This work contributes to the burgeoning literature on language in marketing, showcases how subtle linguistic features impact perceived confidence, and has clear implications for anyone trying to be more persuasive.

General Discussion

Communicating uncertainty is an inescapable part of marketplace interactions. Customer service representatives suggest solutions that “they think” will work, marketers inform buyers about risks a product “may” have, and consumers recommend restaurants that have the best food “in their opinion.” Such communications are critical in determining which solutions are implemented, which products are bought, and which restaurants are visited.

But while it is clear that hedging is both frequent and important, less is known about its impact. Do hedges always hurt persuasion? If not, which hedges are more or less persuasive, and why?

Six studies explore these questions. First, they demonstrate that different types of hedges have different effects. Consistent with our theorizing, hedges associated with higher likelihood of occurrence (Studies 1, 2A, 3, and 4A) or that take a personal (rather than general) perspective (Studies 1, 2B, 3, and 4B) are more persuasive. Further, hedges don’t always reduce persuasion (Studies 2A and 2B). Testing these effects using dozens of different hedges, across multiple domains, and using multiple measures of persuasion (including consequential choice) speaks to their robustness and generalizability.

Second, the studies demonstrate a common process that underlies these effects. When communicators use hedges associated with higher likelihood, or a personal (rather than general) perspective, it makes them seem more confident. This, in turn, increases persuasion (Studies 1, 3, 4A and 4B). Demonstrating these effects through mediation (Studies 1, 3, 4A and 4B) and moderation (Studies 4A and 4B) underscores robustness. Further, while other factors may contribute, the studies conducted here indicate full mediation by perceived confidence, highlighting its importance.

Psychologists and other mental health professionals may want to consider this research as part of psychotherapy.

Wednesday, January 12, 2022

Hidden wisdom or pseudo-profound bullshit? The effect of speaker admirability

Kara-Yakoubian, et al.
(2021, October 28).


How do people reason in response to ambiguous messages shared by admirable individuals? Using behavioral markers and self-report questionnaires, in two experiments (N = 571) we examined the influence of speakers’ admirability on meaning-seeking and wise reasoning in response to pseudo-profound bullshit. In both studies, statements that sounded superficially impressive but lacked intent to communicate meaning generated meaning-seeking, but only when delivered by high admirability speakers (e.g., the Dalai Lama) as compared to low admirability speakers (e.g., Kim Kardashian). The effect of speakers’ admirability on meaning-seeking was unique to pseudo-profound bullshit statements and was absent for mundane (Study 1) and motivational (Study 2) statements. In Study 2, participants also engaged in wiser reasoning for pseudo-profound bullshit (vs. motivational) statements and did more so when speakers were high in admirability. These effects occurred independently of the amount of time spent on statements or the complexity of participants’ reflections. It appears that pseudo-profound bullshit can promote epistemic reflection and certain aspects of wisdom, when associated with an admirable speaker.

From the General Discussion

Pseudo-profound language represents a type of misinformation (Čavojová et al., 2019b; Littrell et al., 2021; Pennycook & Rand, 2019a) where ambiguity reigns. Our findings suggest that source admirability could play an important role in the cognitive processing of ambiguous misinformation, including fake news (Pennycook & Rand, 2020) and euphemistic language (Walker et al., 2021). For instance, in the case of fake news, people may be more inclined to engage in epistemic reflection if the source of an article is highly admirable. However, we also observed that statements from high (vs. low) admirability sources were judged as more profound and were better liked. Extended to misinformation, a combination of greater perceived profundity, liking, and acquired meaning could potentially facilitate the sharing of ambiguous fake news content throughout social networks. Increased reflective thinking (as measured by the CRT) has also been linked to greater discernment on social media, with individuals who score higher on the CRT being less likely to believe fake news stories and share this type of content (Mosleh et al., 2021; Pennycook & Rand, 2019a). Perhaps, people might engage in more epistemic reflection if the source of an article is highly admirable, which may in turn predict a decrease in the sharing behaviour of fake news. Similarly, people may be more inclined to engage in epistemic reflection for euphemistic language, such as the term “enhanced interrogation” used in replacement of “torture,” and conclude that this type of language means something other than what it refers to, if used by a more admirable (compared to a less admirable) individual.

Saturday, January 30, 2021

Checked by reality, some QAnon supporters seek a way out

David Klepper
Associated Press
Originally posted 28 January 21

Here are two excerpts:

It's not clear exactly how many people believe some or all of the narrative, but backers of the movement were vocal in their support for Trump and helped fuel the insurrectionists who overran the U.S. Capitol this month. QAnon is also growing in popularity overseas.

Former believers interviewed by The Associated Press liken the process of leaving QAnon to kicking a drug addiction. QAnon, they say, offers simple explanations for a complicated world and creates an online community that provides escape and even friendship.

Smith's then-boyfriend introduced her to QAnon. It was all he could talk about, she said. At first she was skeptical, but she became convinced after the death of financier Jeffrey Epstein while in federal custody facing pedophilia charges. Officials debunked theories that he was murdered, but to Smith and other QAnon supporters, his suicide while facing child sex charges was too much to accept.

Soon, Smith was spending more time on fringe websites and on social media, reading and posting about the conspiracy theory. She said she fell for QAnon content that presented no evidence, no counter arguments, and yet was all too convincing.


“This isn't about critical thinking, of having a hypothesis and using facts to support it," Cohen said of QAnon believers. “They have a need for these beliefs, and if you take that away, because the storm did not happen, they could just move the goal posts.”

Some now say Trump's loss was always part of the plan, or that he secretly remains president, or even that Joe Biden's inauguration was created using special effects or body doubles. They insist that Trump will prevail, and powerful figures in politics, business and the media will be tried and possibly executed on live television, according to recent social media posts.

“Everyone will be arrested soon. Confirmed information,” read a post viewed 130,000 times this week on Great Awakening, a popular QAnon channel on Telegram. “From the very beginning I said it would happen.”

But a different tone is emerging in the spaces created for those who have heard enough.

“Hi my name is Joe,” one man wrote on a Q recovery channel in Telegram. “And I’m a recovering QAnoner.”

Tuesday, March 24, 2020

The effectiveness of moral messages on public health behavioral intentions during the COVID-19 pandemic

J. Everett, C. Colombatto, & others
PsyArXiv PrePrints
Originally posted 20 March 20

With the COVID-19 pandemic threatening millions of lives, changing our behaviors to prevent the spread of the disease is a moral imperative. Here, we investigated the effectiveness of messages inspired by three major moral traditions on public health behavioral intentions. A sample of US participants representative for age, sex and race/ethnicity (N=1032) viewed messages from either a leader or citizen containing deontological, virtue-based, utilitarian, or non-moral justifications for adopting social distancing behaviors during the COVID-19 pandemic. We measured the messages’ effects on participants’ self-reported intentions to wash hands, avoid social gatherings, self-isolate, and share health messages, as well as their beliefs about others’ intentions, impressions of the messenger’s morality and trustworthiness, and beliefs about personal control and responsibility for preventing the spread of disease. Consistent with our pre-registered predictions, deontological messages had modest effects across several measures of behavioral intentions, second-order beliefs, and impressions of the messenger, while virtue-based messages had modest effects on personal responsibility for preventing the spread. These effects were observed for messages from leaders and citizens alike. Our findings are at odds with participants’ own beliefs about moral persuasion: a majority of participants predicted the utilitarian message would be most effective. We caution that these effects are modest in size, likely due to ceiling effects on our measures of behavioral intentions and strong heterogeneity across all dependent measures along several demographic dimensions including age, self-identified gender, self-identified race, political conservatism, and religiosity. 
Although the utilitarian message was the least effective among those tested, individual differences in one key dimension of utilitarianism—impartial concern for the greater good—were strongly and positively associated with public health intentions and beliefs. Overall, our preliminary results suggest that public health messaging focused on duties and responsibilities toward family, friends and fellow citizens will be most effective in slowing the spread of COVID-19 in the US. Ongoing work is investigating whether deontological persuasion generalizes across different populations, what aspects of deontological messages drive their persuasive effects, and how such messages can be most effectively delivered across global populations.

The research is here.

Saturday, February 15, 2020

Influencing the physiology and decisions of groups: Physiological linkage during group decision-making

Thorson, K. R., and others.
(2020). Group Processes & Intergroup Relations. 


Many of the most important decisions in our society are made within groups, yet we know little about how the physiological responses of group members predict the decisions that groups make. In the current work, we examine whether physiological linkage from “senders” to “receivers”—which occurs when a sender’s physiological response predicts a receiver’s physiological response—is associated with senders’ success at persuading the group to make a decision in their favor. We also examine whether experimentally manipulated status—an important predictor of social behavior—is associated with physiological linkage. In groups of 5, we randomly assigned 1 person to be high status, 1 low status, and 3 middle status. Groups completed a collaborative decision-making task that required them to come to a consensus on a decision to hire 1 of 5 firms. Unbeknownst to the 3 middle-status members, high- and low-status members surreptitiously were told to each argue for different firms. We measured cardiac interbeat intervals of all group members throughout the decision-making process to assess physiological linkage. We found that the more receivers were physiologically linked to senders, the more likely groups were to make a decision in favor of the senders. We did not find that people were physiologically linked to their group members as a function of their fellow group members’ status. This work identifies physiological linkage as a novel correlate of persuasion and highlights the need to understand the relationship between group members’ physiological responses during group decision-making.

Friday, January 31, 2020

Strength of conviction won’t help to persuade when people disagree

Press Release
Originally posted 16 Dec 19

The brain scanning study, published in Nature Neuroscience, reveals a new type of confirmation bias that can make it very difficult to alter people’s opinions.

“We found that when people disagree, their brains fail to encode the quality of the other person’s opinion, giving them less reason to change their mind,” said the study’s senior author, Professor Tali Sharot (UCL Psychology & Language Sciences).

For the study, the researchers asked 42 participants, split into pairs, to estimate house prices. They each wagered on whether the asking price would be more or less than a set amount, depending on how confident they were. Next, each lay in an MRI scanner with the two scanners divided by a glass wall. On their screens they were shown the properties again, reminded of their own judgements, then shown their partner’s assessment and wagers, and finally were asked to submit a final wager.

The researchers found that, when both participants agreed, people would increase their final wagers to larger amounts, particularly if their partner had placed a high wager.

Conversely, when the partners disagreed, the opinion of the disagreeing partner had little impact on people’s wagers, even if the disagreeing partner had placed a high wager.

The researchers found that one brain area, the posterior medial prefrontal cortex (pMFC), was involved in incorporating another person’s beliefs into one’s own. Brain activity differed depending on the strength of the partner’s wager, but only when they were already in agreement. When the partners disagreed, there was no relationship between the partner’s wager and brain activity in the pMFC region.

The info is here.

Monday, December 2, 2019

Neuroscientific evidence in the courtroom: a review.

Aono, D., Yaffe, G. & Kober, H.
Cognitive Research: Principles and Implications, 4, 40 (2019).


The use of neuroscience in the courtroom can be traced back to the early twentieth century. However, the use of neuroscientific evidence in criminal proceedings has increased significantly over the last two decades. This rapid increase has raised questions, among the media as well as the legal and scientific communities, regarding the effects that such evidence could have on legal decision makers. In this article, we first outline the history of neuroscientific evidence in courtrooms and then we provide a review of recent research investigating the effects of neuroscientific evidence on decision-making broadly, and on legal decisions specifically. In the latter case, we review studies that measure the effect of neuroscientific evidence (both imaging and nonimaging) on verdicts, sentencing recommendations, and beliefs of mock jurors and judges presented with a criminal case. Overall, the reviewed studies suggest mitigating effects of neuroscientific evidence on some legal decisions (e.g., the death penalty). Furthermore, factors such as mental disorder diagnoses and perceived dangerousness might moderate the mitigating effect of such evidence. Importantly, neuroscientific evidence that includes images of the brain does not appear to have an especially persuasive effect (compared with other neuroscientific evidence that does not include an image). Future directions for research are discussed, with a specific call for studies that vary defendant characteristics, the nature of the crime, and a juror’s perception of the defendant, in order to better understand the roles of moderating factors and cognitive mediators of persuasion.


The increased use of neuroscientific evidence in criminal proceedings has led some to wonder what effects such evidence has on legal decision makers (e.g., jurors and judges) who may be unfamiliar with neuroscience. There is some concern that legal decision makers may be unduly influenced by testimony and images related to the defendant’s brain. This paper briefly reviews the history of neuroscientific evidence in the courtroom to provide context for its current use. It then reviews the current research examining the influence of neuroscientific evidence on legal decision makers and potential moderators of such effects. Our synthesis of the findings suggests that neuroscientific evidence has some mitigating effects on legal decisions, although neuroimaging-based evidence does not hold any special persuasive power. With this in mind, we provide recommendations for future research in this area. Our review and conclusions have implications for scientists, legal scholars, judges, and jurors, who could all benefit from understanding the influence of neuroscientific evidence on judgments in criminal cases.

Friday, May 31, 2019

The Ethics of Smart Devices That Analyze How We Speak

Trevor Cox
Harvard Business Review
Originally posted May 20, 2019

Here is an excerpt:

But what happens when machines start analyzing how we talk? The big tech firms are coy about exactly what they are planning to detect in our voices and why, but Amazon has a patent that lists a range of traits they might collect, including identity (“gender, age, ethnic origin, etc.”), health (“sore throat, sickness, etc.”), and feelings, (“happy, sad, tired, sleepy, excited, etc.”).

This worries me — and it should worry you, too — because algorithms are imperfect. And voice is particularly difficult to analyze because the signals we give off are inconsistent and ambiguous. What’s more, the inferences that even humans make are distorted by stereotypes. Let’s use the example of trying to identify sexual orientation. There is a style of speaking with raised pitch and swooping intonations which some people assume signals a gay man. But confusion often arises because some heterosexuals speak this way, and many homosexuals don’t. Science experiments show that human aural “gaydar” is only right about 60% of the time. Studies of machines attempting to detect sexual orientation from facial images have shown a success rate of about 70%. Sound impressive? Not to me, because that means those machines are wrong 30% of the time. And I would anticipate success rates to be even lower for voices, because how we speak changes depending on who we’re talking to. Our vocal anatomy is very flexible, which allows us to be oral chameleons, subconsciously changing our voices to fit in better with the person we’re speaking with.

The info is here.

Monday, May 20, 2019

How Drug Companies Helped Shape a Shifting Biological View of Mental Illness

Terry Gross
NPR Health Shots
Originally posted May 2, 2019

Here are two excerpts:

On why the antidepressant market is now at a standstill

The huge developments that happen in the story of depression and the antidepressants happens in the late '90s, when a range of different studies increasingly seemed to suggest that these antidepressants — although they're helping a lot of people — when compared to placebo versions of themselves, don't seem to do much better. And that is not because they are not helping people, but because the placebos are also helping people. Simply thinking you're taking Prozac, I guess, can have a powerful effect on your state of depression. In order, though, for a drug to get on the market, it's got to beat the placebo. If it can't beat the placebo, the drug fails.


On why pharmaceutical companies are leaving the psychiatric field

Because there have been no new good ideas as to where to look for new, novel biomarkers or targets since the 1960s. The only possible exception is there is now some excitement about ketamine, which targets a different set of biochemical systems. But R&D is very expensive. These drugs are now, mostly, off-patent. ... [The pharmaceutical companies'] efforts to bring on new drugs in that sort of tried-and-true and tested way — with a tinker here and a tinker there — has been running up against mostly unexplained but indubitable problems with the placebo effect.

The info is here.

Friday, April 12, 2019

It’s Not Enough to Be Right—You Also Have to Be Kind

Ryan Holiday
Originally posted on March 20, 2019

Here is an excerpt:

Reason is easy. Being clever is easy. Humiliating someone in the wrong is easy too. But putting yourself in their shoes, kindly nudging them to where they need to be, understanding that they have emotional and irrational beliefs just like you have emotional and irrational beliefs—that’s all much harder. So is not writing off other people. So is spending more time working on the plank in your own eye than on the splinter in theirs. We know we wouldn’t respond to someone talking to us that way, but we seem to think it’s okay to do it to other people.

There is a great clip of Joe Rogan talking during the immigration crisis last year. He doesn’t make some fact-based argument about whether immigration is or isn’t a problem. He doesn’t attack anyone on either side of the issue. He just talks about what it feels like—to him—to hear a mother screaming for the child she’s been separated from. The clip has been seen millions of times now and undoubtedly has changed more minds than a government shutdown, than the squabbles and fights on CNN, than the endless op-eds and think-tank reports.

Rogan doesn’t even tell anyone what to think. (Though, ironically, the clip was abused by plenty of editors who tried to make it partisan). He just says that if you can’t relate to that mom and her pain, you’re not on the right team. That’s the right way to think about it.

The info is here.

Tuesday, February 12, 2019

How to tell the difference between persuasion and manipulation

Robert Noggle
Originally published August 1, 2018

Here is an excerpt:

It appears, then, that whether an influence is manipulative depends on how it is being used. Iago’s actions are manipulative and wrong because they are intended to get Othello to think and feel the wrong things. Iago knows that Othello has no reason to be jealous, but he gets Othello to feel jealous anyway. This is the emotional analogue to the deception that Iago also practises when he arranges matters (eg, the dropped handkerchief) to trick Othello into forming beliefs that Iago knows are false. Manipulative gaslighting occurs when the manipulator tricks another into distrusting what the manipulator recognises to be sound judgment. By contrast, advising an angry friend to avoid making snap judgments before cooling off is not acting manipulatively, if you know that your friend’s judgment really is temporarily unsound. When a conman tries to get you to feel empathy for a non-existent Nigerian prince, he acts manipulatively because he knows that it would be a mistake to feel empathy for someone who does not exist. Yet a sincere appeal to empathy for real people suffering undeserved misery is moral persuasion rather than manipulation. When an abusive partner tries to make you feel guilty for suspecting him of the infidelity that he just committed, he is acting manipulatively because he is trying to induce misplaced guilt. But when a friend makes you feel an appropriate amount of guilt over having deserted him in his hour of need, this does not seem manipulative.

The info is here.

Thursday, January 24, 2019

What Could Be Wrong with a Little ‘Moral Clarity’?

Frank Guan
The New York Times Magazine
Originally posted January 2, 2019

If, in politics, words are weapons, they often prove themselves double-edged. So it was when, on the summer night that Alexandria Ocasio-Cortez learned that she had won a Democratic congressional primary over a 10-term incumbent, she provided a resonant quote to a TV reporter. “I think what we’ve seen is that working-class Americans want a clear champion,” she said, “and there is nothing radical about moral clarity in 2018.” Dozens of news videos and articles would cite those words as journalists worked to interpret what Ocasio-Cortez’s triumph, repeated in November’s general election, might represent for the American left and its newest star.

Until recently, “moral clarity” was more likely to signal combativeness toward the left, not from it: It served for decades as a badge of membership among conservative hawks and cultural crusaders. But in the Trump era, militant certainty takes precedence across the political spectrum. On the left, “moral clarity” can mean taking an unyielding stand against economic inequality or social injustice, climate change or gun violence. Closer to the center, it can take on a sonorous, transpartisan tone, as when Senator Robert Menendez, a Democrat, and former Speaker Paul Ryan, a Republican, each called for “moral clarity” in the White House reaction to the murder of the journalist Jamal Khashoggi. And it can fly beyond politics altogether, as when the surgeon and author Atul Gawande writes that better health care “does not take genius. It takes diligence. It takes moral clarity.” We hear about moral clarity any time there is impatience with equivocation, delay, conciliation and confusion — whenever people long for rapid action based on truths they hold to be self-evident.

The info is here.

Sunday, November 4, 2018

When Tech Knows You Better Than You Know Yourself

Nicholas Thompson
Originally published October 4, 2018

Here is an excerpt:

Hacking a Human

NT: Explain what it means to hack a human being and why what can be done now is different from what could be done 100 years ago.

YNH: To hack a human being is to understand what's happening inside you on the level of the body, of the brain, of the mind, so that you can predict what people will do. You can understand how they feel and you can, of course, once you understand and predict, you can usually also manipulate and control and even replace. And of course it can't be done perfectly and it was possible to do it to some extent also a century ago. But the difference in the level is significant. I would say that the real key is whether somebody can understand you better than you understand yourself. The algorithms that are trying to hack us, they will never be perfect. There is no such thing as understanding perfectly everything or predicting everything. You don't need perfect, you just need to be better than the average human being.

If you have an hour, please watch the video.

Friday, July 20, 2018

How to Look Away

Megan Garber
The Atlantic
Originally published June 20, 2018

Here is an excerpt:

It is a dynamic—the democratic alchemy that converts seeing things into changing them—that the president and his surrogates have been objecting to, as they have defended their policy. They have been, this week (with notable absences), busily appearing on cable-news shows and giving disembodied quotes to news outlets, insisting that things aren’t as bad as they seem: that the images and the audio and the evidence are wrong not merely ontologically, but also emotionally. Don’t be duped, they are telling Americans. Your horror is incorrect. The tragedy is false. Your outrage about it, therefore, is false. Because, actually, the truth is so much more complicated than your easy emotions will allow you to believe. Actually, as Fox News host Laura Ingraham insists, the holding pens that seem to house horrors are “essentially summer camps.” And actually, as Fox & Friends’ Steve Doocy instructs, the pens are not cages so much as “walls” that have merely been “built … out of chain-link fences.” And actually, Kirstjen Nielsen wants you to remember, “We provide food, medical, education, all needs that the child requests.” And actually, too—do not be fooled by your own empathy, Tom Cotton warns—think of the child-smuggling. And of MS-13. And of sexual assault. And of soccer fields. There are so many reasons to look away, so many other situations more deserving of your outrage and your horror.

It is a neat rhetorical trick: the logic of not in my backyard, invoked not merely despite the fact that it is happening in our backyard, but because of it. With seed and sod that we ourselves have planted.

Yes, yes, there are tiny hands, reaching out for people who are not there … but those are not the point, these arguments insist and assure. To focus on those images—instead of seeing the system, a term that Nielsen and even Trump, a man not typically inclined to think in networked terms, have been invoking this week—is to miss the larger point.

The article is here.

Thursday, April 12, 2018

The Tech Industry’s War on Kids

Richard Freed
Originally published March 12, 2018

Here is an excerpt:

Fogg speaks openly of the ability to use smartphones and other digital devices to change our ideas and actions: “We can now create machines that can change what people think and what people do, and the machines can do that autonomously.” Called “the millionaire maker,” Fogg has groomed former students who have used his methods to develop technologies that now consume kids’ lives. As he recently touted on his personal website, “My students often do groundbreaking projects, and they continue having impact in the real world after they leave Stanford… For example, Instagram has influenced the behavior of over 800 million people. The co-founder was a student of mine.”

Intriguingly, there are signs that Fogg is feeling the heat from recent scrutiny of the use of digital devices to alter behavior. His boast about Instagram, which was present on his website as late as January of 2018, has been removed. Fogg’s website also has lately undergone a substantial makeover, as he now seems to go out of his way to suggest his work has benevolent aims, commenting, “I teach good people how behavior works so they can create products & services that benefit everyday people around the world.” Likewise, the Stanford Persuasive Technology Lab website optimistically claims, “Persuasive technologies can bring about positive changes in many domains, including health, business, safety, and education. We also believe that new advances in technology can help promote world peace in 30 years.”

While Fogg emphasizes persuasive design’s sunny future, he is quite indifferent to the disturbing reality now: that hidden influence techniques are being used by the tech industry to hook and exploit users for profit. His enthusiastic vision also conveniently neglects to include how this generation of children and teens, with their highly malleable minds, is being manipulated and hurt by forces unseen.

The article is here.

Saturday, October 21, 2017

Thinking about the social cost of technology

Natasha Lomas
Tech Crunch
Originally posted September 30, 2017

Here is an excerpt:

Meanwhile, ‘users’ like my mum are left with another cryptic puzzle of unfamiliar pieces to try to slot back together and — they hope — return the tool to the state of utility it was in before everything changed on them again.

These people will increasingly feel left behind and unplugged from a society where technology is playing an ever greater day-to-day role, and also playing an ever greater, yet largely unseen role in shaping day to day society by controlling so many things we see and do. AI is the silent decision maker that really scales.

The frustration and stress caused by complex technologies that can seem unknowable — not to mention the time and mindshare that gets wasted trying to make systems work as people want them to work — doesn’t tend to get talked about in the slick presentations of tech firms with their laser pointers fixed on the future and their intent locked on winning the game of the next big thing.

All too often the fact that human lives are increasingly enmeshed with and dependent on ever more complex, and ever more inscrutable, technologies is considered a good thing. Negatives don’t generally get dwelled on. And for the most part people are expected to move along, or be moved along by the tech.

That’s the price of progress, goes the short sharp shrug. Users are expected to use the tool — and take responsibility for not being confused by the tool.

But what if the user can’t properly use the system because they don’t know how to? Are they at fault? Or is it the designers failing to properly articulate what they’ve built and pushed out at such scale? And failing to layer complexity in a way that does not alienate and exclude?

And what happens when the tool becomes so all-consuming of people’s attention and so capable of pushing individual buttons that it becomes a mainstream source of public opinion? And does so without showing its workings. Without making it clear it’s actually presenting a filtered, algorithmically controlled view.

There’s no newspaper style masthead or TV news captions to signify the existence of Facebook’s algorithmic editors. But increasingly people are tuning in to social media to consume news.

This signifies a major, major shift.

The article is here.

Friday, August 4, 2017

Re: Nudges in a Post-truth World

Guest Post: Nathan Hodson
Journal of Medical Ethics Blog
Originally posted July 19, 2017

Here is an excerpt:

As Levy notes, some people are concerned that nudges present a threat to autonomy. Attempts at reconciling nudges with ethics, then, are important because nudging in healthcare is here to stay but we need to ensure it is used in ways that respect autonomy (and other moral principles).

The term “nudge” is perhaps a misnomer. To fill out the concept a bit, it commonly denotes the application of behavioural economics and behavioural psychology to the construction of choice architecture through carefully designed trials. But every choice we face, in any context, already comes with a choice architecture: there are endless contextual factors that impact the decisions we make.

When we ask whether nudging is acceptable we are asking whether an arbitrary or random choice architecture is more acceptable than a deliberate choice architecture, or whether an uninformed choice architecture is better than one informed by research.

In fact the permissibility of a nudge derives from whether it is being used in an ethically acceptable way, something that can only be explored on an individual basis. Thaler and Sunstein locate ethical acceptability in promoting the health of the person being nudged (and call this Libertarian Paternalism — i.e. sensible choices are promoted but no option is foreclosed). An alternative approach was proposed by Mitchell: nudges are justified if they maximise future liberty. Either way the nudging itself is not inherently problematic.

The article is here.