Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Dishonesty.

Tuesday, October 10, 2023

The Moral Case for No Longer Engaging With Elon Musk’s X

David Lee
Bloomberg.com
Originally published 5 October 23

Here is an excerpt:

Social networks are molded by the incentives presented to users. In the same way we can encourage people to buy greener cars with subsidies or promote healthy living by giving out smartwatches, so, too, can levers be pulled to improve the health of online life. Online, people can’t be told what to post, but sites can try to nudge them toward behaving in a certain manner, whether through design choices or reward mechanisms.

Under the previous management, Twitter at least paid lip service to this. In 2020, it introduced a feature that encouraged people to actually read articles before retweeting them, for instance, to promote “informed discussion.” Jack Dorsey, the co-founder and former chief executive officer, claimed to be thinking deeply about improving the quality of conversations on the platform — seeking ways to better measure and improve good discourse online. Another experiment was hiding the “likes” count in an attempt to train away our brain’s yearning for the dopamine hit we get from social engagement.

One thing the prior Twitter management didn’t do is actively make things worse. When Musk introduced creator payments in July, he splashed rocket fuel over the darkest elements of the platform. These kinds of posts always existed, in no small number, but are now the despicable main event. There’s money to be made. X’s new incentive structure has turned the site into a hive of so-called engagement farming — posts designed with the sole intent to elicit literally any kind of response: laughter, sadness, fear. Or the best one: hate. Hate is what truly juices the numbers.

The user who shared the video of Carson’s attack wasn’t the only one to do it. But his track record on these kinds of posts, and the inflammatory language, primed it to be boosted by the algorithm. By Tuesday, the user was still at it, making jokes about Carson’s girlfriend. All content monetized by advertising, which X desperately needs. It’s no mistake, and the user’s no fringe figure. In July, he posted that the site had paid him more than $16,000. Musk interacts with him often.


Here's my take: 

Lee pointed out that social networks can shape user behavior through incentives, and the previous management of Twitter had made some efforts to promote healthier online interactions. However, under Elon Musk's management, the platform has taken a different direction, actively encouraging provocative and hateful content to boost engagement.

Lee criticized the new incentive structure on X, where users are financially rewarded for producing controversial content. He argued that as the competition for attention intensifies, the content will likely become more violent and divisive.

Lee also mentioned an incident involving former Twitter executive Yoel Roth, who raised concerns about hate speech on the platform, and Musk's dismissive response to those concerns. On this evidence, Musk is no business genius: he does not understand how to promote a healthy social media site.

Monday, October 9, 2023

They Studied Dishonesty. Was Their Work a Lie?

Gideon Lewis-Kraus
The New Yorker
Originally published 30 Sept 23

Here is an excerpt:

Despite a good deal of readily available evidence to the contrary, neoclassical economics took it for granted that humans were rational. Kahneman and Tversky found flaws in this assumption, and built a compendium of our cognitive biases. We rely disproportionately on information that is easily retrieved: a recent news article about a shark attack seems much more relevant than statistics about how rarely such attacks actually occur. Our desires are in flux—we might prefer pizza to hamburgers, and hamburgers to nachos, but nachos to pizza. We are easily led astray by irrelevant details. In one experiment, Kahneman and Tversky described a young woman who had studied philosophy and participated in anti-nuclear demonstrations, then asked a group of participants which inference was more probable: either “Linda is a bank teller” or “Linda is a bank teller and is active in the feminist movement.” More than eighty per cent chose the latter, even though it is a subset of the former. We weren’t Homo economicus; we were giddy and impatient, our thoughts hasty, our actions improvised. Economics tottered.

Behavioral economics emerged for public consumption a generation later, around the time of Ariely’s first book. Where Kahneman and Tversky held that we unconsciously trick ourselves into doing the wrong thing, behavioral economists argued that we might, by the same token, be tricked into doing the right thing. In 2008, Richard Thaler and Cass Sunstein published “Nudge,” which argued for what they called “libertarian paternalism”—the idea that small, benign alterations of our environment might lead to better outcomes. When employees were automatically enrolled in 401(k) programs, twice as many saved for retirement. This simple bureaucratic rearrangement improved a great many lives.

Thaler and Sunstein hoped that libertarian paternalism might offer “a real Third Way—one that can break through some of the least tractable debates in contemporary democracies.” Barack Obama, who hovered above base partisanship, found much to admire in the promise of technocratic tinkering. He restricted his outfit choices mostly to gray or navy suits, based on research into “ego depletion,” or the concept that one might exhaust a given day’s reservoir of decision-making energy. When, in the wake of the 2008 financial crisis, Obama was told that money “framed” as income was more likely to be spent than money framed as wealth, he enacted monthly tax deductions instead of sending out lump-sum stimulus checks. He eventually created a behavioral-sciences team in the White House. (Ariely had once found that our decisions in a restaurant are influenced by whoever orders first; it’s possible that Obama was driven by the fact that David Cameron, in the U.K., was already leaning on a “nudge unit.”)

The nudge, at its best, was modest—even a minor potential benefit at no cost pencilled out. In the Obama years, a pop-up on computers at the Department of Agriculture reminded employees that single-sided printing was a waste, and that advice reduced paper use by six per cent. But as these ideas began to intermingle with those in the adjacent field of social psychology, the reasonable notion that some small changes could have large effects at scale gave way to a vision of individual human beings as almost boundlessly pliable. Even Kahneman was convinced. He told me, “People invented things that shouldn’t have worked, and they were working, and I was enormously impressed by it.” Some of these interventions could be implemented from above. 
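A side note on the Linda example above: the error it illustrates is known as the conjunction fallacy. In standard probability notation (my restatement, not the article's):

    P(\text{bank teller} \land \text{active feminist}) \le P(\text{bank teller})

A conjunction can never be more probable than either of its conjuncts, which is why the “bank teller and feminist” option is necessarily the less likely one.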


Monday, October 2, 2023

Research: How One Bad Employee Can Corrupt a Whole Team

Stephen Dimmock & William Gerken
Harvard Business Review
Originally posted 5 March 2018

Here is an excerpt:

In our research, we wanted to understand just how contagious bad behavior is. To do so, we examined peer effects in misconduct by financial advisors, focusing on mergers between financial advisory firms that each have multiple branches. In these mergers, financial advisors meet new co-workers from one of the branches of the other firm, exposing them to new ideas and behaviors.

We collected an extensive data set using the detailed regulatory filings available for financial advisors. We defined misconduct as customer complaints for which the financial advisor either paid a settlement of at least $10,000 or lost an arbitration decision. We observed when complaints occurred for each financial advisor, as well as for the advisor’s co-workers.

We found that financial advisors are 37% more likely to commit misconduct if they encounter a new co-worker with a history of misconduct. This result implies that misconduct has a social multiplier of 1.59 — meaning that, on average, each case of misconduct results in an additional 0.59 cases of misconduct through peer effects.

However, observing similar behavior among co-workers does not explain why this similarity occurs. Co-workers could behave similarly because of peer effects – in which workers learn behaviors or social norms from each other — but similar behavior could arise because co-workers face the same incentives or because individuals prone to making similar choices naturally choose to work together.

In our research, we wanted to understand how peer effects contribute to the spread of misconduct. We compared financial advisors across different branches of the same firm, because this allowed us to control for the effect of the incentive structure faced by all advisors in the firm. We also focused on changes in co-workers caused by mergers, because this allowed us to remove the effect of advisors choosing their co-workers. As a result, we were able to isolate peer effects.


Here is my summary: 

The article discusses a study finding that even otherwise honest employees are more likely to commit misconduct if they work alongside a dishonest individual. The study, conducted by the article's authors, Stephen Dimmock and William Gerken, found that financial advisors were 37% more likely to commit misconduct if they encountered a new co-worker with a history of misconduct.

The researchers believe that this is because people are more likely to learn bad behavior than good behavior. When we see someone else getting away with misconduct, it can make us think that it's okay to do the same thing. Additionally, when we're surrounded by people who are behaving badly, it can create a culture of acceptance for misconduct.
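One note on the numbers: the article's jump from “37% more likely” to “a social multiplier of 1.59” follows standard social-multiplier arithmetic. The sketch below is my own back-of-the-envelope reconstruction, not the authors' calculation.

    # Back-of-the-envelope reconstruction of the social multiplier
    # (my own sketch, not Dimmock & Gerken's calculation).
    p = 0.37                  # increase in misconduct likelihood after exposure
    multiplier = 1 / (1 - p)  # sums the cascade 1 + p + p^2 + ...
    print(round(multiplier, 2))      # 1.59
    print(round(multiplier - 1, 2))  # 0.59 extra cases per initial case

Read this way, each act of misconduct spreads to co-workers, whose misconduct spreads in turn, and the geometric series converges to roughly 1.59 total cases per initial case.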

Tuesday, April 4, 2023

Chapter One - Moral inconsistency

Effron, D. A., & Helgason, B. A.
Advances in Experimental Social Psychology
Volume 67, 2023, Pages 1-72

Abstract

We review a program of research examining three questions. First, why is the morality of people's behavior inconsistent across time and situations? We point to people's ability to convince themselves they have a license to sin, and we demonstrate various ways people use their behavioral history and others—individuals, groups, and society—to feel licensed. Second, why are people's moral judgments of others' behavior inconsistent? We highlight three factors: motivation, imagination, and repetition. Third, when do people tolerate others who fail to practice what they preach? We argue that people only condemn others' inconsistency as hypocrisy if they think the others are enjoying an “undeserved moral benefit.” Altogether, this program of research suggests that people are surprisingly willing to enact and excuse inconsistency in their moral lives. We discuss how to reconcile this observation with the foundational social psychological principle that people hate inconsistency.

(cut)

The benefits of moral inconsistency

The present chapter has focused on the negative consequences of moral inconsistency. We have highlighted how the factors that promote moral inconsistency can allow people to lie, cheat, express prejudice, and reduce their condemnation of others' morally suspect behaviors ranging from leaving the scene of an accident to spreading fake news. At the same time, people's apparent proclivity for moral inconsistency is not all bad.

One reason is that, in situations that pit competing moral values against each other, moral inconsistency may be unavoidable. For example, when a friend asks whether you like her unflattering new haircut, you must either say no (which would be inconsistent with your usual kind behavior) or yes (which would be inconsistent with your usual honest behavior; Levine, Roberts, & Cohen, 2020). If you discover corruption in your workplace, you might need to choose between blowing the whistle (which would be inconsistent with your typically loyal behavior toward the company) or staying silent (which would be inconsistent with your typically fair behavior; Dungan, Waytz, & Young, 2015; Waytz, Dungan, & Young, 2013).

Another reason is that people who strive for perfect moral consistency may incur steep costs. They may be derogated and shunned by others, who feel threatened and judged by these “do-gooders” (Howe & Monin, 2017; Minson & Monin, 2012; Monin, Sawyer, & Marquez, 2008; O’Connor & Monin, 2016). Or they may sacrifice themselves and loved ones more than they can afford, like the young social worker who consistently donated to charity until she and her partner were living on 6% of their already-modest income, or the couple who, wanting to consistently help children in need of a home, adopted 22 kids (MacFarquhar, 2015). In short, we may enjoy greater popularity and an easier life if we allow ourselves at least some moral inconsistency.

Finally, moral inconsistency can sometimes benefit society. Evolving moral beliefs about smoking (Rozin, 1999; Rozin & Singh, 1999) have led to considerable public health benefits. Stalemates in partisan conflict are hard to break if both sides rigidly refuse to change their judgments and behavior surrounding potent moral issues (Brandt, Wetherell, & Crawford, 2016). Same-sex marriage, women's sexual liberation, and racial desegregation required inconsistency in how people treated actions that were once considered wrong. In this way, moral inconsistency may be necessary for moral progress.

Monday, March 6, 2023

Cognitive control and dishonesty

Speer, S. P., Smidts, A., & Boksem, M. A. (2022b).
Trends in Cognitive Sciences, 26(9), 796–808.
https://doi.org/10.1016/j.tics.2022.06.005

Abstract

Dishonesty is ubiquitous and imposes substantial financial and social burdens on society. Intuitively, dishonesty results from a failure of willpower to control selfish behavior. However, recent research suggests that the role of cognitive control in dishonesty is more complex. We review evidence that cognitive control is not needed to be honest or dishonest per se, but that it depends on individual differences in what we call one’s ‘moral default’: for those who are prone to dishonesty, cognitive control indeed aids in being honest, but for those who are already generally honest, cognitive control may help them cheat to occasionally profit from small acts of dishonesty. Thus, the role of cognitive control in (dis)honesty is to override the moral default.

Significance

The precise role of cognitive control in dishonesty has been debated for many years, but now important strides have been made to resolve this debate.

Recently developed paradigms that allow for investigating dishonesty on the level of the choice rather than on the level of the individual have substantially improved our understanding of the adaptive role of cognitive control in (dis)honesty.

These new paradigms revealed that the role of cognitive control differs across people: for cheaters, it helps them to sometimes be honest, while for those who are generally honest, it allows them to cheat on occasion. Thus, cognitive control is not required for (dis)honesty per se but is required to override one’s moral default to be either honest or to cheat.

Individual differences in moral default are driven by balancing motivation for reward and upholding a moral self-image.

From Concluding remarks

The Will and Grace hypotheses have been debated for quite some time, but recently important strides have been made to resolve this debate. Key elements in this proposed resolution are (i) recognizing that there is heterogeneity between individuals, some default more towards honesty, whereas others have a stronger inclination towards dishonesty; (ii) recognizing that there is heterogeneity within individuals, cheaters can be honest sometimes and honest people do cheat on occasion; and (iii) the development of experimental paradigms that allow dishonesty to be investigated on the level of the choice, rather than only on the level of the individual or the group. These developments have substantially enhanced understanding of the role of cognitive control in (dis)honesty: it is not required for being honest or dishonest per se, but it is required to override one’s moral default to either be honest or to cheat (Figure 1).

These insights open up novel research agendas and offer suggestions as to how to develop interventions to curtail dishonesty. Our review suggests three processes that may be targeted by such interventions: reward seeking, self-referential thinking, and cognitive control. Shaping contexts in ways that are conducive to honesty by targeting these processes may go a long way to increase honesty in everyday behavior.

Wednesday, March 1, 2023

Cognitive Control Promotes Either Honesty or Dishonesty, Depending on One's Moral Default

Speer, S. P., Smidts, A., & Boksem, M. A. S. (2021).
The Journal of Neuroscience, 41(42), 8815–8825. 
https://doi.org/10.1523/jneurosci.0666-21.2021

Abstract

Cognitive control is crucially involved in making (dis)honest decisions. However, the precise nature of this role has been hotly debated. Is honesty an intuitive response, or is will power needed to override an intuitive inclination to cheat? A reconciliation of these conflicting views proposes that cognitive control enables dishonest participants to be honest, whereas it allows those who are generally honest to cheat. Thus, cognitive control does not promote (dis)honesty per se; it depends on one's moral default. In the present study, we tested this proposal using electroencephalograms in humans (males and females) in combination with an independent localizer (Stroop task) to mitigate the problem of reverse inference. Our analysis revealed that the neural signature evoked by cognitive control demands in the Stroop task can be used to estimate (dis)honest choices in an independent cheating task, providing converging evidence that cognitive control can indeed help honest participants to cheat, whereas it facilitates honesty for cheaters.

Significance Statement

Dishonesty causes enormous economic losses. To target dishonesty with interventions, a rigorous understanding of the underlying cognitive mechanisms is required. A recent study found that cognitive control enables honest participants to cheat, whereas it helps cheaters to be honest. However, it is evident that a single study does not suffice as support for a novel hypothesis. Therefore, we tested the replicability of this finding using a different modality (EEG instead of fMRI) together with an independent localizer task to avoid reverse inference. We find that the same neural signature evoked by cognitive control demands in the localizer task can be used to estimate (dis)honesty in an independent cheating task, establishing converging evidence that the effect of cognitive control indeed depends on a person's moral default.

From the Discussion section

Previous research has deduced the involvement of cognitive control in moral decision-making through relating observed activations to those observed for cognitive control tasks in prior studies (Greene and Paxton, 2009; Abe and Greene, 2014) or with the help of meta-analytic evidence (Speer et al., 2020) from the Neurosynth platform (Yarkoni et al., 2011). This approach, which relies on reverse inference, must be used with caution because any given brain area may be involved in several different cognitive processes, which makes it difficult to conclude that activation observed in a particular brain area represents one specific function (Poldrack, 2006). Here, we extend prior research by providing more rigorous evidence by means of explicitly eliciting cognitive control in a separate localizer task and then demonstrating that this same neural signature can be identified in the Spot-The-Difference task when participants are exposed to the opportunity to cheat. Moreover, using similarity analysis we provide a direct link between the neural signature of cognitive control, as elicited by the Stroop task, and (dis)honesty by showing that time-frequency patterns of cognitive control demands in the Stroop task are indeed similar to those observed when tempted to cheat in the Spot-The-Difference task. These results provide strong evidence that cognitive control processes are recruited when individuals are tempted to cheat.

Friday, February 10, 2023

Individual differences in (dis)honesty are represented in the brain's functional connectivity at rest

Speer, S. P., Smidts, A., & Boksem, M. A. (2022).
NeuroImage, 246, 118761.
https://doi.org/10.1016/j.neuroimage.2021.118761

Abstract

Measurement of the determinants of socially undesirable behaviors, such as dishonesty, is complicated and obscured by social desirability biases. To circumvent these biases, we used connectome-based predictive modeling (CPM) on resting state functional connectivity patterns in combination with a novel task which inconspicuously measures voluntary cheating to gain access to the neurocognitive determinants of (dis)honesty. Specifically, we investigated whether task-independent neural patterns within the brain at rest could be used to predict a propensity for (dis)honest behavior. Our analyses revealed that functional connectivity, especially between brain networks linked to self-referential thinking (vmPFC, temporal poles, and PCC) and reward processing (caudate nucleus), reliably correlates, in an independent sample, with participants’ propensity to cheat. Participants who cheated the most also scored highest on several self-report measures of impulsivity which underscores the generalizability of our results. Notably, when comparing neural and self-report measures, the neural measures were found to be more important in predicting cheating propensity.

Significance statement

Dishonesty pervades all aspects of life and causes enormous economic losses. However, because the underlying mechanisms of socially undesirable behaviors are difficult to measure, the neurocognitive determinants of individual differences in dishonesty largely remain unknown. Here, we apply machine-learning methods to stable patterns of neural connectivity to investigate how dispositions toward (dis)honesty, measured by an innovative behavioral task, are encoded in the brain. We found that stronger connectivity between brain regions associated with self-referential thinking and reward are predictive of the propensity to be honest. The high predictive accuracy of our machine-learning models, combined with the reliable nature of resting-state functional connectivity, which is uncontaminated by the social-desirability biases to which self-report measures are susceptible, provides an excellent avenue for the development of useful neuroimaging-based biomarkers of socially undesirable behaviors.

Discussion

Employing connectome-based predictive modeling (CPM) in combination with the innovative Spot-The-Differences task, which allows for inconspicuously measuring cheating, we identified a functional connectome that reliably predicts a disposition toward (dis)honesty in an independent sample. We observed a Pearson correlation between out-of-sample predicted and actual cheatcount (r = 0.40) that resides on the higher side of the typical range of correlations (between r = 0.2 and r = 0.5) reported in previous studies employing CPM (Shen et al., 2017). Thus, functional connectivity within the brain at rest predicts whether someone is more honest or more inclined to cheat in our task.

In light of previous research on moral decisions, the regions we identified in our resting state analysis can be associated with two networks frequently found to be involved in moral decision making. First, the vmPFC, the bilateral temporal poles and the PCC have consistently been associated with self-referential thinking. For example, it has been found that functional connectivity between these areas during rest is associated with higher-level metacognitive operations such as self-reflection, introspection and self-awareness (Gusnard et al., 2001; Meffert et al., 2013; Northoff et al., 2006; Vanhaudenhuyse et al., 2011). Secondly, the caudate nucleus, which has been found to be involved in anticipation and valuation of rewards (Ballard and Knutson, 2009; Knutson et al., 2001) can be considered an important node in the reward network (Bartra et al., 2013). Participants with higher levels of activation in the reward network, in anticipation of rewards, have previously been found to indeed be more dishonest (Abe and Greene, 2014).
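For readers unfamiliar with connectome-based predictive modeling, here is a minimal leave-one-out sketch in the spirit of the Shen et al. (2017) protocol the paper cites. The variable names, the p < .01 edge-selection threshold, and the positive-network-only simplification are mine, not the study's.

    # Minimal leave-one-out CPM sketch (after the Shen et al., 2017
    # protocol cited above). Illustrative simplifications are mine:
    # a single positive network and an arbitrary p < .01 threshold.
    import numpy as np
    from scipy.stats import pearsonr

    def cpm_loo(edges, behavior, p_thresh=0.01):
        """edges: (n_subjects, n_edges) resting-state connectivity values;
        behavior: (n_subjects,) scores, e.g. a cheat count."""
        n = len(behavior)
        predicted = np.zeros(n)
        for test in range(n):
            train = np.arange(n) != test
            X, y = edges[train], behavior[train]
            # 1) Correlate every edge with behavior in the training set.
            rs, ps = zip(*(pearsonr(X[:, j], y) for j in range(X.shape[1])))
            keep = (np.array(ps) < p_thresh) & (np.array(rs) > 0)
            # 2) Summarize each subject as summed strength of kept edges.
            strength = X[:, keep].sum(axis=1)
            # 3) Fit a linear model and predict the held-out subject.
            slope, intercept = np.polyfit(strength, y, 1)
            predicted[test] = slope * edges[test, keep].sum() + intercept
        # 4) Evaluate as the paper does: Pearson r(predicted, actual).
        return pearsonr(predicted, behavior)[0]

An r of about 0.40 between predicted and actual cheat counts, as reported above, would sit at the high end of the 0.2 to 0.5 range typical for this method.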

Tuesday, September 20, 2022

The Look Over Your Shoulder: Unethical Behaviour Decreases in the Physical Presence of Observers

Köbis, N., van der Lingen, S., et al. (2019, February 5).
https://doi.org/10.31234/osf.io/gxu96

Abstract

Research in behavioural ethics repeatedly emphasizes the importance of others for people’s decisions to break ethical rules. Yet, in most lab experiments participants faced ethical dilemmas in full privacy settings. We conducted three experiments in which we compare such private set-ups to situations in which a second person is co-present in the lab. Study 1 manipulated whether that second person was a mere observer or co-benefitted from the participants’ unethical behaviour. Study 2 investigated social proximity between participant and observer – being a friend versus a stranger. Study 3 tested whether the mere presence of another person who cannot observe the participant’s behaviour suffices to decrease unethical behaviour. By using different behavioural paradigms of unethical behaviour, we obtain three main results: first, the presence of an observing other curbs unethical behaviour. Second, neither the payoff structure (Study 1) nor the social proximity towards the observing other (Study 2) qualifies this effect. Third, the mere presence of others does not reduce unethical behaviour if they do not observe the participant (Study 3). Implications, limitations and avenues for future research are discussed.

General Discussion

Taken together, the results of three experiments suggest that the physical presence of others reduces unethical behaviour, yet only if that other person can actually observe the behaviour. Even though the second person had no means to formally sanction wrong-doing, onlookers’ presence curtailed unethical behaviour, while the local social utility (co-beneficiary or observer, Study 1) and the level of proximity (friend vs. stranger, Study 2) played a less important role. When others are merely present without being able to observe, no such attenuating effect on unethical behaviour occurs (Study 3). Introducing the physical presence of another person to the rapidly growing stream of behavioural ethics research, our experiments provide some of the first empirical insights into the actual social aspects of unethical behaviour.

Humans are social animals who spend a substantial proportion of their time in company. Many decisions are made in the presence or under the gaze of others. At the same time, the overwhelming majority of lab experiments in behavioural ethics consists of individuals making decisions in isolation (for a meta-analysis, see Abeler et al., 2016). Field experiments, likewise, have only sparsely examined the tangible social elements of unethical behaviour (for a review, see Pierce & Balasubramanian, 2015). Nevertheless, the behavioural ethics literature emphasizes that appearing moral to others is one of the main explanatory factors for when and how people break ethical rules (Mazar, Amir, & Ariely, 2008; Pillutla & Murnighan, 1995). Yet, so far, behavioural research on the presence and observability of actual others remains sparse. Providing some of the first insights into how the physical presence of others shapes our moral compass can advance behavioural ethics and potentially inform the design of practical interventions.


There is a direct application here for those who practice independently: with no one physically present to observe their work, this natural check on unethical behaviour is absent.

Saturday, September 10, 2022

Social norms and dishonesty across societies

Aycinena, D., et al.
PNAS, 119 (31), 2022.

Abstract

Social norms have long been recognized as an important factor in curtailing antisocial behavior, and stricter prosocial norms are commonly associated with increased prosocial behavior. In this study, we provide evidence that very strict prosocial norms can have a perverse negative relationship with prosocial behavior. In laboratory experiments conducted in 10 countries across 5 continents, we measured the level of honest behavior and elicited injunctive norms of honesty. We find that individuals who hold very strict norms (i.e., those who perceive a small lie to be as socially unacceptable as a large lie) are more likely to lie to the maximal extent possible. This finding is consistent with a simple behavioral rationale. If the perceived norm does not differentiate between the severity of a lie, lying to the full extent is optimal for a norm violator since it maximizes the financial gain, while the perceived costs of the norm violation are unchanged. We show that the relation between very strict prosocial norms and high levels of rule violations generalizes to civic norms related to common moral dilemmas, such as tax evasion, cheating on government benefits, and fare dodging on public transportation. Those with very strict attitudes toward civic norms are more likely to lie to the maximal extent possible. A similar relation holds across countries. Countries with a larger fraction of people with very strict attitudes toward civic norms have a higher society-level prevalence of rule violations.

Significance

Much of the research in the experimental and behavioral sciences finds that stronger prosocial norms lead to higher levels of prosocial behavior. Here, we show that very strict prosocial norms are negatively correlated with prosocial behavior. Using laboratory experiments on honesty, we demonstrate that individuals who hold very strict norms of honesty are more likely to lie to the maximal extent. Further, countries with a larger fraction of people with very strict civic norms have proportionally more societal-level rule violations. We show that our findings are consistent with a simple behavioral rationale. If perceived norms are so strict that they do not differentiate between small and large violations, then, conditional on a violation occurring, a large violation is individually optimal.


In essence, very strict social norms can backfire. When a maximal lie carries roughly the same perceived social cost as a minimal one, people who decide to lie at all have every incentive to lie to the fullest extent.
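The rationale is easy to see in a toy payoff model (my own illustration with arbitrary numbers, not the authors' specification): when the perceived norm-violation cost is flat in the size of the lie, any lie worth telling is worth telling maximally.

    # Toy payoff model of the paper's rationale (my own illustration,
    # arbitrary numbers, not the authors' model).
    def payoff(lie_size, gain_per_unit=1.0, flat_norm_cost=3.0):
        # Under a very strict norm, a small lie is judged as harshly as
        # a large one, so the cost does not scale with lie_size.
        cost = flat_norm_cost if lie_size > 0 else 0.0
        return gain_per_unit * lie_size - cost

    print({size: payoff(size) for size in (0, 1, 5, 10)})
    # {0: 0.0, 1: -2.0, 5: 2.0, 10: 7.0} -- conditional on lying at all,
    # the maximal lie dominates.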

Saturday, August 13, 2022

The moral psychology of misinformation: Why we excuse dishonesty in a post-truth world

Effron, D. A., & Helgason, B. A.
Current Opinion in Psychology
Volume 47, October 2022, 101375

Abstract

Commentators say we have entered a “post-truth” era. As political lies and “fake news” flourish, citizens appear not only to believe misinformation, but also to condone misinformation they do not believe. The present article reviews recent research on three psychological factors that encourage people to condone misinformation: partisanship, imagination, and repetition. Each factor relates to a hallmark of “post-truth” society: political polarization, leaders who push “alternative facts,” and technology that amplifies disinformation. By lowering moral standards, convincing people that a lie's “gist” is true, or dulling affective reactions, these factors not only reduce moral condemnation of misinformation, but can also amplify partisan disagreement. We discuss implications for reducing the spread of misinformation.

Repeated exposure to misinformation reduces moral condemnation

A third hallmark of a post-truth society is the existence of technologies, such as social media platforms, that amplify misinformation. Such technologies allow fake news – “articles that are intentionally and verifiably false and that could mislead readers” – to spread fast and far, sometimes in multiple periods of intense “contagion” across time. When fake news does “go viral,” the same person is likely to encounter the same piece of misinformation multiple times. Research suggests that these multiple encounters may make the misinformation seem less unethical to spread.

Conclusion

In a post-truth world, purveyors of misinformation need not convince the public that their lies are true. Instead, they can reduce the moral condemnation they receive by appealing to our politics (partisanship), convincing us a falsehood could have been true or might become true in the future (imagination), or simply exposing us to the same misinformation multiple times (repetition). Partisanship may lower moral standards, partisanship and imagination can both make the broader meaning of the falsehood seem true, and repetition can blunt people's negative affective reaction to falsehoods (see Figure 1). Moreover, because partisan alignment strengthens the effects of imagination and facilitates repeated contact with falsehoods, each of these processes can exacerbate partisan divisions in the moral condemnation of falsehoods. Understanding these effects and their pathways informs interventions aimed at reducing the spread of misinformation.

Ultimately, the line of research we have reviewed offers a new perspective on our post-truth world. Our society is not just post-truth in that people can lie and be believed. We are post-truth in that it is concerningly easy to get a moral pass for dishonesty – even when people know you are lying.

Saturday, September 18, 2021

Fraudulent data raise questions about superstar honesty researcher

Cathleen O'Grady
Sciencemag.com
Originally posted 24 Aug 21

Here is an excerpt:

Some time later, a group of anonymous researchers downloaded those data, according to last week’s post on Data Colada. A simple look at the participants’ mileage distribution revealed something very suspicious. Other data sets of people’s driving distances show a bell curve, with some people driving a lot, a few very little, and most somewhere in the middle. In the 2012 study, there was an unusually equal spread: Roughly the same number of people drove every distance between 0 and 50,000 miles. “I was flabbergasted,” says the researcher who made the discovery. (They spoke to Science on condition of anonymity because of fears for their career.)

Worrying that PNAS would not investigate the issue thoroughly, the whistleblower contacted the Data Colada bloggers instead, who conducted a follow-up review that convinced them the field study results were statistically impossible.

For example, a set of odometer readings provided by customers when they first signed up for insurance, apparently real, was duplicated to suggest the study had twice as many participants, with random numbers between one and 1000 added to the original mileages to disguise the deceit. In the spreadsheet, the original figures appeared in the font Calibri, but each had a close twin in another font, Cambria, with the same number of cars listed on the policy, and odometer readings within 1000 miles of the original. In 1 million simulated versions of the experiment, the same kind of similarity appeared not a single time, Simmons, Nelson, and Simonsohn found. “These data are not just excessively similar,” they write. “They are impossibly similar.”

Ariely calls the analysis “damning” and “clear beyond doubt.” He says he has requested a retraction, as have his co-authors, separately. “We are aware of the situation and are in communication with the authors,” PNAS Editorial Ethics Manager Yael Fitzpatrick said in a statement to Science.

Three of the authors say they were only involved in the two lab studies reported in the paper; a fourth, Boston University behavioral economist Nina Mazar, forwarded the Data Colada investigators a 16 February 2011 email from Ariely with an attached Excel file that contains the problems identified in the blog post. Its metadata suggest Ariely had created the file 3 days earlier.

Ariely tells Science he made a mistake in not checking the data he received from the insurance company, and that he no longer has the company’s original file. He says Duke’s integrity office told him the university’s IT department does not have email records from that long ago. His contacts at the insurance company no longer work there, Ariely adds, but he is seeking someone at the company who could find archived emails or files that could clear his name. His publication of the full data set last year showed he was unaware of any problems with it, he says: “I’m not an idiot. This is a very easy fraud to catch.”
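The first red flag described above, a flat mileage distribution where a bell curve was expected, is simple to visualize. Below is a rough sketch with made-up parameters, not the actual insurance data.

    # Rough sketch of the distributional red flag (made-up parameters,
    # not the real insurance data).
    import random
    random.seed(0)

    genuine = [max(0.0, random.gauss(12000, 4000)) for _ in range(10000)]
    suspect = [random.uniform(0, 50000) for _ in range(10000)]

    def hist(data, width=10000, top=50000):
        for lo in range(0, top, width):
            n = sum(lo <= x < lo + width for x in data)
            print(f"{lo:>6}-{lo + width:<6} {'#' * (n // 200)}")

    hist(genuine)  # piles up around a typical mileage (bell-like)
    print()
    hist(suspect)  # roughly equal counts in every bin (flat)

Genuine driving data cluster around a typical annual mileage; a distribution in which every mileage from 0 to 50,000 is about equally common is the signature of numbers drawn from a random-number generator rather than from odometers.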

Tuesday, January 26, 2021

Publish or Be Ethical? 2 Studies of Publishing Pressure & Scientific Misconduct in Research

Paruzel-Czachura, M., Baran, L., & Spendel, Z.
Research Ethics. December 2020. 

Abstract

The paper reports two studies exploring the relationship between scholars’ self-reported publication pressure and their self-reported scientific misconduct in research. In Study 1 the participants (N = 423) were scholars representing various disciplines from one big university in Poland. In Study 2 the participants (N = 31) were exclusively members of the management, such as dean, director, etc. from the same university. In Study 1 the most common reported form of scientific misconduct was honorary authorship. The majority of researchers (71%) reported that they had not violated ethical standards in the past; 3% admitted to scientific misconduct; 51% reported being aware of colleagues’ scientific misconduct. A small positive correlation between perceived publication pressure and intention to engage in scientific misconduct in the future was found. In Study 2 more than half of the management (52%) reported being aware of researchers’ dishonest practices, the most frequent one of these being honorary authorship. As many as 71% of the participants reported observing publication pressure in their subordinates. The primary conclusions are: (1) most scholars are convinced of their morality and predict that they will behave morally in the future; (2) scientific misconduct, particularly minor offenses such as honorary authorship, is frequently observed both by researchers (particularly in their colleagues) and by their managers; (3) researchers experiencing publication pressure report a willingness to engage in scientific misconduct in the future.

Conclusion

Our findings suggest that the notion of “publish or be ethical?” may constitute a real dilemma for researchers. Although only 3% of our sample admitted to having engaged in scientific misconduct, 71% reported that they definitely had not violated ethical standards in the past. Furthermore, more than half (51%) reported seeing scientific misconduct among their colleagues. We did not find a correlation between unsatisfactory work conditions and scientific misconduct, but we did find evidence to support the theory that perceived pressure to collect publication points is correlated with willingness to exceed ethical standards in the future.

Saturday, November 21, 2020

Unethical amnesia responds more to instrumental than to hedonic motives

Galeotti, F., Saucet, C., & Villeval, M. C.
PNAS, October 13, 2020 117 (41) 25423-25428; 
first published September 28, 2020; 

Abstract

Humans care about morality. Yet, they often engage in actions that contradict their moral self. Unethical amnesia is observed when people do not remember or remember less vividly these actions. This paper explores two reasons why individuals may experience unethical amnesia. Forgetting past unethical behavior may be motivated by purely hedonic or affective reasons, such as the willingness to maintain one’s moral self-image, but also by instrumental or strategic motives, in anticipation of future misbehavior. In a large-scale incentivized online experiment (n = 1,322) using a variant of a mind game, we find that hedonic considerations are not sufficient to motivate the forgetting of past cheating behavior. This is confirmed in a follow-up experiment (n = 1,005) in which recalls are elicited the same day instead of 3 wk apart. However, when unethical amnesia can serve as a justification for a future action, such as deciding on whether to keep undeserved money, motivated forgetting is more likely. Thereby, we show that motivated forgetting occurs as a self-excuse to justify future immoral decisions.

Significance

Using large-scale incentivized online experiments, we tested two possible origins of individuals’ forgetting about their past cheating behavior in a mind game. We found that purely hedonic considerations, such as the maintenance of a positive self-image, are not sufficient to motivate unethical amnesia, but the addition of an instrumental value to forgetting triggers such amnesia. Individuals forget their past lies more when amnesia can serve as an excuse not to engage in future morally responsible behavior. These findings shed light on the interplay between dishonesty and memory and suggest further investigations of the cost function of unethical amnesia. A policy implication is that improving ethics requires making unethical amnesia more difficult for individuals.

Tuesday, March 24, 2020

Sen. Kelly Loeffler Dumped Millions in Stock After Coronavirus Briefing

L. Markay, W. Bredderman, & S. Bordy
thedailybeast.com
Updated 20 March 20

The Senate’s newest member sold off seven figures’ worth of stock holdings in the days and weeks after a private, all-senators meeting on the novel coronavirus that subsequently hammered U.S. equities.

Sen. Kelly Loeffler (R-GA) reported the first sale of stock jointly owned by her and her husband on Jan. 24, the very day that her committee, the Senate Health Committee, hosted a private, all-senators briefing from administration officials, including the CDC director and Anthony Fauci, the head of the National Institute of Allergy and Infectious Diseases, on the coronavirus.

“Appreciate today’s briefing from the President’s top health officials on the novel coronavirus outbreak,” she tweeted about the briefing at the time.

That first transaction was a sale of stock in the company Resideo Technologies valued at between $50,001 and $100,000. The company’s stock price has fallen by more than half since then, and the Dow Jones Industrial Average overall has shed approximately 10,000 points, dropping about a third of its value.

It was the first of 29 stock transactions that Loeffler and her husband made through mid-February, all but two of which were sales. One of Loeffler’s two purchases was stock worth between $100,000 and $250,000 in Citrix, a technology company that offers teleworking software and which has seen a small bump in its stock price since Loeffler bought in as a result of coronavirus-induced market turmoil.

The info is here.

Monday, March 23, 2020

Burr moves to quell fallout from stock sales with request for Ethics probe

Jack Brewster
politico.com
Originally posted 20 March 20

Sen. Richard Burr (R-N.C.) on Friday asked the Senate Ethics Committee to review stock sales he made weeks before the markets began to tank in response to the coronavirus pandemic — a move designed to limit the fallout from an intensifying political crisis.

Burr, who chairs the powerful Senate Intelligence Committee, defended the sales, saying he “relied solely on public news reports to guide my decision regarding the sale of stocks" and disputed the notion he used information that he was privy to during classified briefings on the novel coronavirus. Burr specifically name-checked CNBC’s daily health and science reporting from its Asia bureau.

“Understanding the assumption many could make in hindsight however, I spoke this morning with the chairman of the Senate Ethics Committee and asked him to open a complete review of the matter with full transparency,” Burr said in a statement.

Burr, who is retiring at the end of 2022, has faced calls to resign from across the ideological spectrum since ProPublica reported Thursday that he dumped between $628,000 and $1.72 million of his holdings on Feb. 13 in 33 different transactions — a week before the stock market began plummeting amid fears of the coronavirus spreading in the U.S.

The info is here.

Thursday, February 27, 2020

Liar, Liar, Liar

S. Vedantam, M. Penman, & T. Boyle
Hidden Brain - NPR.org
Originally posted 17 Feb 20

When we think about dishonesty, we mostly think about the big stuff.

We see big scandals, big lies, and we think to ourselves, I could never do that. We think we're fundamentally different from Bernie Madoff or Tiger Woods.

But behind big lies are a series of small deceptions. Dan Ariely, a professor of psychology and behavioral economics at Duke University, writes about this in his book The Honest Truth about Dishonesty.

"One of the frightening conclusions we have is that what separates honest people from not-honest people is not necessarily character, it's opportunity," he said.

These small lies are quite common. When we lie, it's not always a conscious or rational choice. We want to lie and we want to benefit from our lying, but we want to be able to look in the mirror and see ourselves as good, honest people. We might go a little too fast on the highway, or pocket extra change at a gas station, but we're still mostly honest ... right?

That's why Ariely describes honesty as something of a state of mind. He thinks the IRS should have people sign a pledge committing to be honest when they start working on their taxes, not when they're done. Setting the stage for honesty is more effective than asking someone after the fact whether or not they lied.

The info is here.

There is a 30-minute audio file worth listening to.

Wednesday, November 27, 2019

Corruption Is Contagious: Dishonesty begets dishonesty, rapidly spreading unethical behavior through a society

Dan Ariely & Ximena Garcia-Rada
Scientific American
September 2019

Here is an excerpt:

This is because social norms—the patterns of behavior that are accepted as normal—impact how people will behave in many situations, including those involving ethical dilemmas. In 1991 psychologists Robert B. Cialdini, Carl A. Kallgren and Raymond R. Reno drew the important distinction between descriptive norms—the perception of what most people do—and injunctive norms—the perception of what most people approve or disapprove of. We argue that both types of norms influence bribery.

Simply put, knowing that others are paying bribes to obtain preferential treatment (a descriptive norm) makes people feel that it is more acceptable to pay a bribe themselves.

Similarly, thinking that others believe that paying a bribe is acceptable (an injunctive norm) will make people feel more comfortable when accepting a bribe request. Bribery becomes normative, affecting people's moral character.

In 2009 Ariely, with behavioral researchers Francesca Gino and Shahar Ayal, published a study showing how powerful social norms can be in shaping dishonest behavior. In two lab studies, they assessed the circumstances in which exposure to others' unethical behavior would change someone's ethical decision-making. Group membership turned out to have a significant effect: When individuals observed an in-group member behaving dishonestly (a student with a T-shirt suggesting he or she was from the same school cheating in a test), they, too, behaved dishonestly. In contrast, when the person behaving dishonestly was an out-group member (a student with a T-shirt from the rival school), observers acted more honestly.

But social norms also vary from culture to culture: What is acceptable in one culture might not be acceptable in another. For example, in some societies giving gifts to clients or public officials demonstrates respect for a business relationship, whereas in other cultures it is considered bribery. Similarly, gifts for individuals in business relationships can be regarded either as lubricants of business negotiations, in the words of behavioral economists Michel André Maréchal and Christian Thöni, or as questionable business practices. And these expectations and rules about what is accepted are learned and reinforced by observation of others in the same group. Thus, in countries where individuals regularly learn that others are paying bribes to obtain preferential treatment, they determine that paying bribes is socially acceptable. Over time the line between ethical and unethical behavior becomes blurry, and dishonesty becomes the “way of doing business.”

The info is here.

Friday, July 19, 2019

Revisiting Morality In the Age of Dishonesty

Wim Laven
citywatchla.com
Originally posted June 27, 2019

If Donald Trump actually follows through on his recently tweeted promise that Immigrations and Customs Enforcement (ICE) “will begin deporting the millions of illegal aliens who have illicitly found their way into the United States … as fast as they come in,” what will you do?

According to the faith I was raised with I hope I would act according to the lessons found in the parable of the Good Samaritan. In the Gospel of Luke Jesus told of a traveler who was beaten, stripped, and left naked waiting for death. People who claimed to have good faith avoided this victim, but it was the Samaritan who stopped and rendered aid—a selfless act of altruism. Charity, compassion, and forgiveness are the highest values I was raised with. I do my best to dedicate myself to their service, and I’m sure I’m not the only one left in a bind: what will I do?

Recent stories tell of modern-day Samaritans rendering aid to travelers (some seeking asylum, some trying to immigrate legally, some illegally…) at great risk. The case of Scott Warren in Arizona presents offering humanitarian aid as a crime punishable by up to 20 years in prison; but there is no verdict, the jury is hung. His specific crimes are putting out food and water, and pointing directions (actions consistent with No More Deaths, a part of the Unitarian Universalist Church of Tucson), which appear to reflect values just like those I was raised with. Do I have the strength to follow my religious convictions, even in the face of criminal prosecution like Warren faces?

The info is here.

Friday, January 11, 2019

10 ways to detect health-care lies

Lawton R. Burns and Mark V. Pauly
thehill.com
Originally posted December 9, 2018

Here is an excerpt:

Why does this kind of behavior occur? While flat-out dishonesty for short-term financial gains is an obvious answer, a more common explanation is the need to say something positive when there is nothing positive to say.

This problem is acute in health care. Suppose you are faced with the assignment of solving the ageless dilemma of reducing costs while simultaneously raising quality of care. You could respond with a message of failure or a discussion of inevitable tradeoffs.

But you could also pick an idea with some internal plausibility and political appeal, fashion some careful but conditional language and announce the launch of your program. Of course, you will add that it will take a number of years before success appears, but you and your experts will argue for the idea in concept, with the details to be worked out later.

At minimum, unqualified acceptance of such proposed ideas, even (and especially) by apparently qualified people, will waste resources and will lead to enormous frustration for your audience of politicians and outraged critics of the current system. The incentives to generate falsehoods are not likely to diminish — if anything, rising spending and stagnant health outcomes strengthen them — so it is all the more important to have an accurate and fast way to detect and deter lies in health care.

The info is here.

Monday, December 17, 2018

How Wilbur Ross Lost Millions, Despite Flouting Ethics Rules

Dan Alexander
Forbes.com
Originally published December 14, 2018

Here is an excerpt:

By October 2017, Ross was out of time to divest. In his ethics agreement, he said he would get rid of the funds in the first 180 days after his confirmation—or if not, during a 60-day extension period. So on October 25, exactly 240 days after his confirmation, Ross sold part of his interests to funds managed by Goldman Sachs. Given that he waited until the last possible day to legally divest the assets, it seems certain that he ended up selling at a discount.

The very next day, on October 26, 2017, a reporter for the New York Times contacted Ross with a list of questions about his ties to Navigator, the Putin-linked company. Before the story was published, Ross took out a short position against Navigator—essentially betting that the company’s stock would go down. When the story finally came out, on November 5, 2017, the stock did not plummet initially, but it did creep down 4% by the time Ross closed the short position 11 days later, apparently bolstering his fortune by $3,000 to $10,000.

On November 1, 2017, the day after Ross shorted Navigator, he signed a sworn statement that he had divested everything he previously told federal ethics officials he would. But that was not true. In fact, Ross still owned more than $10 million worth of stock in Invesco, the parent company of his former private equity firm. The next month, he sold those shares, pocketing at least $1.2 million more than he would have if he sold when he first promised to.
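For readers unfamiliar with short selling, the arithmetic behind the $3,000-to-$10,000 figure is straightforward. The position size and prices below are hypothetical, chosen only to land inside the range Forbes reports, not Ross's actual trade.

    # Stylized short-position arithmetic (hypothetical numbers chosen to
    # fall inside the reported $3,000-$10,000 range, not the actual trade).
    shares = 2500        # hypothetical position size
    entry_price = 35.00  # price when the short is opened
    exit_price = 33.60   # about 4% lower, matching the reported drop
    profit = shares * (entry_price - exit_price)
    print(profit)        # 3500.0 -- a 4% decline pays the short seller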