Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Tuesday, March 5, 2024

You could lie to a health chatbot – but it might change how you perceive yourself

Dominic Wilkinson
The Conversation
Originally posted 8 Feb 24

Here is an excerpt:

The ethics of lying

There are different ways that we can think about the ethics of lying.

Lying can be bad because it causes harm to other people. Lies can be deeply hurtful to another person. They can cause someone to act on false information, or to be falsely reassured.

Sometimes, lies can harm because they undermine someone else’s trust in people more generally. But those reasons will often not apply to the chatbot.

Lies can wrong another person, even if they do not cause harm. If we willingly deceive another person, we potentially fail to respect their rational agency, or use them as a means to an end. But it is not clear that we can deceive or wrong a chatbot, since they don’t have a mind or ability to reason.

Lying can be bad for us because it undermines our credibility. Communication with other people is important. But when we knowingly make false utterances, we diminish the value, in other people’s eyes, of our testimony.

For the person who repeatedly expresses falsehoods, everything that they say then falls into question. This is part of the reason we care about lying and our social image. But unless our interactions with the chatbot are recorded and communicated (for example, to humans), our chatbot lies aren’t going to have that effect.

Lying is also bad for us because it can lead to others being untruthful to us in turn. (Why should people be honest with us if we won’t be honest with them?)

But again, that is unlikely to be a consequence of lying to a chatbot. On the contrary, this type of effect could be partly an incentive to lie to a chatbot, since people may be conscious of the reported tendency of ChatGPT and similar agents to confabulate.


Here is my summary:

The article discusses the potential consequences of lying to a health chatbot, even though it might seem tempting. It highlights a situation where someone frustrated with a wait for surgery considers exaggerating their symptoms to a chatbot screening them.

While lying might offer short-term benefits like quicker attention, the author argues it could have unintended consequences:

Impact on healthcare:
  • Inaccurate information can hinder proper diagnosis and treatment.
  • It contributes to an already strained healthcare system.
Self-perception:
  • Repeatedly lying, even to a machine, can erode honesty and integrity.
  • It reinforces unhealthy avoidance of seeking professional help.
The article encourages readers to be truthful with chatbots for better healthcare outcomes and self-awareness. It acknowledges the frustration with healthcare systems but emphasizes the importance of transparency for both individual and collective well-being.

Sunday, March 3, 2024

Is Dan Ariely Telling the Truth?

Tom Bartlett
The Chronicle of Higher Ed
Originally posted 18 Feb 24

Here is an excerpt:

In August 2021, the blog Data Colada published a post titled “Evidence of Fraud in an Influential Field Experiment About Dishonesty.” Data Colada is run by three researchers — Uri Simonsohn, Leif Nelson, and Joe Simmons — and it serves as a freelance watchdog for the field of behavioral science, which has historically done a poor job of policing itself. The influential field experiment in question was described in a 2012 paper, published in the Proceedings of the National Academy of Sciences, by Ariely and four co-authors. In the study, customers of an insurance company were asked to report how many miles they had driven over a period of time, an answer that might affect their premiums. One set of customers signed an honesty pledge at the top of the form, and another signed at the bottom. The study found that those who signed at the top reported higher mileage totals, suggesting that they were more honest. The authors wrote that a “simple change of the signature location could lead to significant improvements in compliance.” The study was classic Ariely: a slight tweak to a system that yields real-world results.

But did it actually work? In 2020, an attempted replication of the effect found that it did not. In fact, multiple attempts to replicate the 2012 finding all failed (though Ariely points to evidence in a recent, unpublished paper, on which he is a co-author, indicating that the effect might be real). The authors of the attempted replication posted the original data from the 2012 study, which was then scrutinized by a group of anonymous researchers who found that the data, or some of it anyway, had clearly been faked. They passed the data along to the Data Colada team. There were multiple red flags. For instance, the number of miles customers said they’d driven was unrealistically uniform. About the same number of people drove 40,000 miles as drove 500 miles. No actual sampling would look like that — but randomly generated data would. Two different fonts were used in the file, apparently because whoever fudged the numbers wasn’t being careful.

In short, there is no doubt that the data were faked. The only question is, who did it?


This article discusses an investigation into the research conduct of Dr. Dan Ariely, a well-known behavioral economist at Duke University. The investigation, prompted by concerns about potential data fabrication, concluded that while no evidence of fabricated data was found, Ariely did commit research misconduct by failing to adequately vet findings and maintain proper records.

The article highlights several specific issues identified by the investigation, including inconsistencies in data and a lack of supporting documentation for key findings. It also mentions that Ariely made inaccurate statements about his personal history, such as misrepresenting his age at the time of a childhood accident.

While Ariely maintains that he did not intentionally fabricate data and attributes the errors to negligence and a lack of awareness, the investigation's findings have damaged his reputation and raised questions about the integrity of his research. The article concludes by leaving the reader to ponder whether Ariely's transgressions can be forgiven or if they represent a deeper pattern of dishonesty.

It's important to note that the article presents one perspective on a complex issue and doesn't offer definitive answers. Further research and analysis are necessary to form a complete understanding of the situation.

Wednesday, February 28, 2024

Scientists are on the verge of a male birth-control pill. Will men take it?

Jill Filipovic
The Guardian
Originally posted 18 Dec 23

Here is an excerpt:

The overwhelming share of responsibility for preventing pregnancy has always fallen on women. Throughout human history, women have gone to great lengths to prevent pregnancies they didn’t want, and end those they couldn’t prevent. Safe and reliable contraceptive methods are, in the context of how long women have sought to interrupt conception, still incredibly new. Measured by the lifespan of anyone reading this article, though, they are well established, and have for many decades been a normal part of life for millions of women around the world.

To some degree, and if only for obvious biological reasons, it makes sense that pregnancy prevention has historically fallen on women. But it also, as they say, takes two to tango – and only one of the partners has been doing all the work. Luckily, things are changing: thanks to generations of women who have gained unprecedented freedoms and planned their families using highly effective contraception methods, and thanks to men who have shifted their own gender expectations and become more involved partners and fathers, women and men have moved closer to equality than ever.

Among politically progressive couples especially, it’s now standard to expect that a male partner will do his fair share of the household management and childrearing (whether he actually does is a separate question, but the expectation is there). What men generally cannot do, though, is carry pregnancies and birth babies.


Here are some themes worthy of discussion:

Shifting responsibility: The potential availability of a reliable male contraceptive marks a significant departure from the historical norm where the burden of pregnancy prevention was primarily borne by women. This shift raises thought-provoking questions that delve into various aspects of societal dynamics.

Gender equality: A crucial consideration is whether men will willingly share responsibility for contraception on an equal footing, or whether societal norms will continue to exert pressure on women to take the lead in this regard.

Reproductive autonomy: The advent of accessible male contraception prompts contemplation on whether it will empower women to exert greater control over their reproductive choices, shaping the landscape of family planning.

Informed consent: An important facet of this shift involves how men will be informed about potential side effects and risks associated with the male contraceptive, particularly in comparison to existing female contraceptives.

Accessibility and equity: Concerns emerge regarding equitable access to the male contraceptive, particularly for marginalized communities. Questions arise about whether affordable and culturally appropriate access will be universally available, regardless of socioeconomic status or geographic location.

Coercion: There is a potential concern that the availability of a male contraceptive might be exploited to coerce women into sexual activity without their full and informed consent.

Psychological and social impact: The introduction of a male contraceptive brings with it potential psychological and social consequences that may not be immediately apparent.

Changes in sexual behavior: The availability of a male contraceptive may influence sexual practices and attitudes towards sex, prompting a reevaluation of societal norms.

Impact on relationships: The shift in responsibility for contraception could potentially cause tension or conflict in existing relationships as couples navigate the evolving dynamics.

Masculinity and stigma: The use of a male contraceptive may challenge traditional notions of masculinity, possibly leading to social stigma that individuals using the contraceptive may face.

Friday, February 9, 2024

The Dual-Process Approach to Human Sociality: Meta-analytic evidence for a theory of internalized heuristics for self-preservation

Capraro, V. (2023).
Journal of Personality and Social Psychology, 

Abstract

Which social decisions are influenced by intuitive processes? Which by deliberative processes? The dual-process approach to human sociality has emerged in the last decades as a vibrant and exciting area of research. Yet, a perspective that integrates empirical and theoretical work is lacking. This review and meta-analysis synthesizes the existing literature on the cognitive basis of cooperation, altruism, truth-telling, positive and negative reciprocity, and deontology, and develops a framework that organizes the experimental regularities. The meta-analytic results suggest that intuition favours a set of heuristics that are related to the instinct for self-preservation: people avoid being harmed, avoid harming others (especially when there is a risk of harm to themselves), and are averse to disadvantageous inequalities. Finally, this paper highlights some key research questions to further advance our understanding of the cognitive foundations of human sociality.

Here is my summary:

This article proposes a dual-process approach to human sociality.  Capraro argues that there are two main systems that govern human social behavior: an intuitive system and a deliberative system. The intuitive system is fast, automatic, and often based on heuristics, or mental shortcuts. The deliberative system is slower, more effortful, and based on a more careful consideration of the evidence.

Capraro argues that the intuitive system plays a key role in cooperation, altruism, truth-telling, positive and negative reciprocity, and deontology. This is because these behaviors are often necessary for self-preservation. For example, in order to avoid being harmed, people are naturally inclined to cooperate with others and avoid harming others. Similarly, in order to maintain positive relationships with others, people are inclined to be truthful and reciprocate favors.

The deliberative system plays a more important role in more complex social situations, such as when people need to make decisions that have long-term consequences or when they need to take into account the needs of others. In these cases, people are more likely to engage in careful consideration of the evidence and to weigh the different options before making a decision. Capraro concludes that the dual-process approach to human sociality provides a framework for understanding the complex cognitive basis of human social behavior. This framework can be used to explain a wide range of social phenomena, from cooperation and altruism to truth-telling and deontology.

Monday, October 9, 2023

They Studied Dishonesty. Was Their Work a Lie?

Gideon Lewis-Kraus
The New Yorker
Originally published 30 Sept 23

Here is an excerpt:

Despite a good deal of readily available evidence to the contrary, neoclassical economics took it for granted that humans were rational. Kahneman and Tversky found flaws in this assumption, and built a compendium of our cognitive biases. We rely disproportionately on information that is easily retrieved: a recent news article about a shark attack seems much more relevant than statistics about how rarely such attacks actually occur. Our desires are in flux—we might prefer pizza to hamburgers, and hamburgers to nachos, but nachos to pizza. We are easily led astray by irrelevant details. In one experiment, Kahneman and Tversky described a young woman who had studied philosophy and participated in anti-nuclear demonstrations, then asked a group of participants which inference was more probable: either “Linda is a bank teller” or “Linda is a bank teller and is active in the feminist movement.” More than eighty per cent chose the latter, even though it is a subset of the former. We weren’t Homo economicus; we were giddy and impatient, our thoughts hasty, our actions improvised. Economics tottered.

Behavioral economics emerged for public consumption a generation later, around the time of Ariely’s first book. Where Kahneman and Tversky held that we unconsciously trick ourselves into doing the wrong thing, behavioral economists argued that we might, by the same token, be tricked into doing the right thing. In 2008, Richard Thaler and Cass Sunstein published “Nudge,” which argued for what they called “libertarian paternalism”—the idea that small, benign alterations of our environment might lead to better outcomes. When employees were automatically enrolled in 401(k) programs, twice as many saved for retirement. This simple bureaucratic rearrangement improved a great many lives.

Thaler and Sunstein hoped that libertarian paternalism might offer “a real Third Way—one that can break through some of the least tractable debates in contemporary democracies.” Barack Obama, who hovered above base partisanship, found much to admire in the promise of technocratic tinkering. He restricted his outfit choices mostly to gray or navy suits, based on research into “ego depletion,” or the concept that one might exhaust a given day’s reservoir of decision-making energy. When, in the wake of the 2008 financial crisis, Obama was told that money “framed” as income was more likely to be spent than money framed as wealth, he enacted monthly tax deductions instead of sending out lump-sum stimulus checks. He eventually created a behavioral-sciences team in the White House. (Ariely had once found that our decisions in a restaurant are influenced by whoever orders first; it’s possible that Obama was driven by the fact that David Cameron, in the U.K., was already leaning on a “nudge unit.”)

The nudge, at its best, was modest—even a minor potential benefit at no cost pencilled out. In the Obama years, a pop-up on computers at the Department of Agriculture reminded employees that single-sided printing was a waste, and that advice reduced paper use by six per cent. But as these ideas began to intermingle with those in the adjacent field of social psychology, the reasonable notion that some small changes could have large effects at scale gave way to a vision of individual human beings as almost boundlessly pliable. Even Kahneman was convinced. He told me, “People invented things that shouldn’t have worked, and they were working, and I was enormously impressed by it.” Some of these interventions could be implemented from above. 


Saturday, September 10, 2022

Social norms and dishonesty across societies

Aycinena, D., et al.
PNAS, 119 (31), 2022.

Abstract

Social norms have long been recognized as an important factor in curtailing antisocial behavior, and stricter prosocial norms are commonly associated with increased prosocial behavior. In this study, we provide evidence that very strict prosocial norms can have a perverse negative relationship with prosocial behavior. In laboratory experiments conducted in 10 countries across 5 continents, we measured the level of honest behavior and elicited injunctive norms of honesty. We find that individuals who hold very strict norms (i.e., those who perceive a small lie to be as socially unacceptable as a large lie) are more likely to lie to the maximal extent possible. This finding is consistent with a simple behavioral rationale. If the perceived norm does not differentiate between the severity of a lie, lying to the full extent is optimal for a norm violator since it maximizes the financial gain, while the perceived costs of the norm violation are unchanged. We show that the relation between very strict prosocial norms and high levels of rule violations generalizes to civic norms related to common moral dilemmas, such as tax evasion, cheating on government benefits, and fare dodging on public transportation. Those with very strict attitudes toward civic norms are more likely to lie to the maximal extent possible. A similar relation holds across countries. Countries with a larger fraction of people with very strict attitudes toward civic norms have a higher society-level prevalence of rule violations.

Significance

Much of the research in the experimental and behavioral sciences finds that stronger prosocial norms lead to higher levels of prosocial behavior. Here, we show that very strict prosocial norms are negatively correlated with prosocial behavior. Using laboratory experiments on honesty, we demonstrate that individuals who hold very strict norms of honesty are more likely to lie to the maximal extent. Further, countries with a larger fraction of people with very strict civic norms have proportionally more societal-level rule violations. We show that our findings are consistent with a simple behavioral rationale. If perceived norms are so strict that they do not differentiate between small and large violations, then, conditional on a violation occurring, a large violation is individually optimal.


In essence, very strict social norms can backfire. When the perceived social cost of a small lie is the same as that of a large one, someone who decides to lie has every incentive to lie to the maximal extent.

Thursday, September 1, 2022

When does moral engagement risk triggering a hypocrite penalty?

Jordan, J. & Sommers, R.
Current Opinion in Psychology
Volume 47, October 2022, 101404

Abstract

Society suffers when people stay silent on moral issues. Yet people who engage morally may appear hypocritical if they behave imperfectly themselves. Research reveals that hypocrites can—but do not always—trigger a “hypocrisy penalty,” whereby they are evaluated as more immoral than ordinary (non-hypocritical) wrongdoers. This pattern reflects that moral engagement can confer reputational benefits, but can also carry reputational costs when paired with inconsistent moral conduct. We discuss mechanisms underlying these costs and benefits, illuminating when hypocrisy is (and is not) evaluated negatively. Our review highlights the role that dishonesty and other factors play in engendering disdain for hypocrites, and offers suggestions for how, in a world where nobody is perfect, people can engage morally without generating backlash.

Conclusion: how to walk the moral tightrope

To summarize, hypocrites can—but do not always—incur a “hypocrisy penalty,” whereby they are evaluated more negatively than they would have been absent engaging. As this review has suggested, when observers scrutinize hypocritical moral engagement, they seem to ask at least three questions. First, does the actor signal to others, through his engagement, that he behaves more morally than he actually does? Second, does the actor, by virtue of his engagement, see himself as more moral than he really is? And third, is the actor's engagement preventing others from reaping benefits that he has already enjoyed? Evidence suggests that hypocritical moral engagement is more likely to carry reputational costs when the answer to these questions is “yes.” At the same time, observers do not seem to reliably impose a hypocrisy penalty just because the transgressions of hypocrites constitute personal moral failings—even as these failings convey weakness of will, highlight inconsistency with the actor's personal values, and reveal that the actor has knowingly done something that she believes to be wrong.

In a world where nobody is perfect, then, how can one engage morally while limiting the risk of subsequently being judged negatively as a hypocrite? We suggest that the answer comes down to two key factors: maximizing the reputational benefits that flow directly from one's moral engagement, and minimizing the reputational costs that flow from the combination of one's engagement and imperfect track record. While more research is needed, here we draw on the mechanisms we have reviewed to highlight four suggestions for those seeking to walk the moral tightrope.

Tuesday, August 9, 2022

You can handle the truth: Mispredicting the consequences of honest communication

Levine, E. E., & Cohen, T. R. (2018).
Journal of Experimental Psychology: General, 
147(9), 1400–1429. 

Abstract

People highly value the moral principle of honesty, and yet, they often avoid being honest with others. One reason people may avoid being completely honest is that honesty frequently conflicts with kindness: candidly sharing one’s opinions and feelings can hurt others and create social tension. In the present research, we explore the actual and predicted consequences of communicating honestly during difficult conversations. We compare honest communication to kind communication as well as a neutral control condition by randomly assigning individuals to be honest, kind, or conscious of their communication in every conversation with every person in their life for three days. We find that people significantly mispredict the consequences of communicating honestly: the experience of being honest is far more pleasurable, leads to greater levels of social connection, and does less relational harm than individuals expect. We establish these effects across two field experiments and two prediction experiments and we document the robustness of our results in a subsequent laboratory experiment. We explore the underlying mechanisms by qualitatively coding participants’ reflections during and following our experiments. This research contributes to our understanding of affective forecasting processes and uncovers fundamental insights on how communication and moral values shape well-being.

From the Discussion section

Our findings make several important contributions to our understanding of morality, affective forecasting, and human communication. First, we provide insight into why people avoid being honest with others. Our results suggest that individuals’ aversion to honesty is driven by a forecasting failure: Individuals expect honesty to be less pleasant and less socially connecting than it is. Furthermore, our studies suggest this is driven by individuals’ misguided fear of social rejection. Whereas prior work on mispredictions of social interactions has primarily examined how individuals misunderstand others or their preferences for interaction, the present research examines how individuals misunderstand others’ reactions to honest disclosure of thoughts and feelings, and how this shapes social communication.

Second, this research documents the broader consequences of being honest. Individuals’ predictions that honest communication would be less enjoyable and socially connecting than kind communication or one’s baseline communication were generally wrong. In the field experiment (Study 1a), participants in the honesty condition either felt similar or higher levels of social connection relative to participants in the kindness and control conditions. Participants in the honesty condition also derived greater long-term hedonic well-being and greater relational improvements relative to participants in the control condition. Furthermore, participants in Study 2 reported increased meaning in their life one week after engaging in their brief, but intense, honest conversation. Scholars have long claimed that morality promotes well-being, but to our knowledge, this is the first research to document how enacting specific moral principles promote different types of well-being.

Taken together, these findings suggest that individuals’ avoidance of honesty may be a mistake. By avoiding honesty, individuals miss out on opportunities that they appreciate in the long-run, and that they would want to repeat. Individuals’ choices about how to behave – in this case, whether or not to communicate honestly – seem to be driven primarily by expectations of enjoyment, but appreciation for these behaviors is driven by the experience of meaning. We encourage future research to further examine how affective forecasting failures may prevent individuals from finding meaning in their lives.

See the link above to the research.

Wednesday, February 3, 2021

Research on Non-verbal Signs of Lies and Deceit: A Blind Alley

T. Brennen & S. Magnussen
Front. Psychol., 14 December 2020

Introduction

Research on the detection of lies and deceit has a prominent place in the field of psychology and law with a substantial research literature published in this field of inquiry during the last five to six decades (Vrij, 2000, 2008; Vrij et al., 2019). There are good reasons for this interest in lie detection. We are all everyday liars, some of us more prolific than others, we lie in personal and professional relationships (Serota et al., 2010; Halevy et al., 2014; Serota and Levine, 2015; Verigin et al., 2019), and lying in public by politicians and other public figures has a long and continuing history (Peters, 2015). However, despite the personal problems that serious everyday lies may cause and the human tragedies political lies may cause, it is lying in court that appears to have been the principal initial motivation for the scientific interest in lie detection.

Lying in court is a threat to fair trials and the rule of law. Lying witnesses may lead to the exoneration of guilty persons or to the conviction of innocent ones. In the US it is well-documented that innocent people have been convicted because witnesses were lying in court (Garrett, 2010, 2011; www.innocenceproject.com). In evaluating the reliability and the truthfulness of a testimony, the court considers other evidence presented to the court, the known facts about the case and the testimonies by other witnesses. Inconsistency with the physical evidence or the testimonies of other witnesses might indicate that the witness is untruthful, or it may simply reflect the fact that the witness has observed, interpreted, and later remembered the critical events incorrectly—normal human errors all too well known in the eyewitness literature (Loftus, 2005; Wells and Loftus, 2013; Howe and Knott, 2015).

Here is how the article concludes:

Is the rational course simply to drop this line of research? We believe it is. The creative studies carried out during the last few decades have been important in showing that psychological folklore, the ideas we share about behavioral signals of lies and deceit are not correct. This debunking function of science is extremely important. But we have now sufficient evidence that there are no specific non-verbal behavioral signals that accompany lying or deceitful behavior. We can safely recommend that courts disregard such behavioral signals when appraising the credibility of victims, witnesses, and suspected offenders. For psychology and law researchers it may be time to move on.

Thursday, August 27, 2020

Patients aren’t being told about the AI systems advising their care

Rebecca Robbins and Erin Brodwin
statnews.com
Originally posted 15 July 20

Here is an excerpt:

The decision not to mention these systems to patients is the product of an emerging consensus among doctors, hospital executives, developers, and system architects, who see little value — but plenty of downside — in raising the subject.

They worry that bringing up AI will derail clinicians’ conversations with patients, diverting time and attention away from actionable steps that patients can take to improve their health and quality of life. Doctors also emphasize that they, not the AI, make the decisions about care. An AI system’s recommendation, after all, is just one of many factors that clinicians take into account before making a decision about a patient’s care, and it would be absurd to detail every single guideline, protocol, and data source that gets considered, they say.

Internist Karyn Baum, who’s leading M Health Fairview’s rollout of the tool, said she doesn’t bring up the AI to her patients “in the same way that I wouldn’t say that the X-ray has decided that you’re ready to go home.” She said she would never tell a fellow clinician not to mention the model to a patient, but in practice, her colleagues generally don’t bring it up either.

Four of the health system’s 13 hospitals have now rolled out the hospital discharge planning tool, which was developed by the Silicon Valley AI company Qventus. The model is designed to identify hospitalized patients who are likely to be clinically ready to go home soon and flag steps that might be needed to make that happen, such as scheduling a necessary physical therapy appointment.

Clinicians consult the tool during their daily morning huddle, gathering around a computer to peer at a dashboard of hospitalized patients, estimated discharge dates, and barriers that could prevent that from occurring on schedule.

The info is here.

Friday, May 1, 2020

The therapist's dilemma: Tell the whole truth?

Jackson, D.
J. Clin. Psychol. 2020; 76: 286–291.
https://doi.org/10.1002/jclp.22895

Abstract

Honest communication between therapist and client is foundational to good psychotherapy. However, while past research has focused on client honesty, the topic of therapist honesty remains almost entirely untouched. Our lab's research seeks to explore the role of therapist honesty, how and why therapists make decisions about when to be completely honest with clients (and when to abstain from telling the whole truth), and the perceived consequences of these decisions. This article reviews findings from our preliminary research, presents a case study of the author's honest disclosure dilemma, and discusses the role of therapeutic tact and its function in the therapeutic process.

Here is an excerpt:

Based on our preliminary research, one of the most common topics of overt dishonesty among therapists was their feelings of frustration or disappointment toward their clients. For example, a therapist working with a client with a diagnosis of avoidant personality disorder may find herself increasingly frustrated by the client’s continual resistance to discussing emotional topics or engaging in activities that would broaden his or her world. Such a client (let’s assume male) is also likely to feel preoccupied with concerns about whether the therapist “likes” him or feels as frustrated with him as he does with himself. Should this client apologize for his behavior and ask if the therapist is frustrated with him, the therapist may feel compelled to reduce the discomfort he is already experiencing by dispelling his concern: “No, it’s okay, I’m not frustrated.”

But either at this moment or at a later point in therapy, once rapport (i.e., the therapeutic alliance) has been more firmly established, a more honest answer to this question might be fruitful: “Yes, I am feeling frustrated that we haven’t been able to find ways for you to implement the changes we discuss here, outside of session. How does it feel for you to hear that I am feeling frustrated?” Or, arguably, an even more honest answer: “Yes, I am sometimes frustrated. I sometimes think we could go deeper here—I think it’d be helpful.” Or, an honest answer that is somewhat less critical of the patient and more self‐focused: “I do feel frustrated that I haven’t been able to be more helpful.” Clearly, there are many ways for a therapist to be honest and/or dishonest, and there are also gradations in whichever direction a therapist chooses.

Thursday, April 30, 2020

Difficult Conversations: Navigating the Tension between Honesty and Benevolence

E. Levine, A. Roberts, & T. Cohen
PsyArXiv
Originally published 18 Jul 19

Abstract

Difficult conversations are a necessary part of everyday life. To help children, employees, and partners learn and improve, parents, managers, and significant others are frequently tasked with the unpleasant job of delivering negative news and critical feedback. Despite the long-term benefits of these conversations, communicators approach them with trepidation, in part, because they perceive them as involving intractable moral conflict between being honest and being kind. In this article, we review recent research on egocentrism, ethics, and communication to explain why communicators overestimate the degree to which honesty and benevolence conflict during difficult conversations, document the conversational missteps people make as a result of this erred perception, and propose more effective conversational strategies that honor the long-term compatibility of honesty and benevolence. This review sheds light on the psychology of moral tradeoffs in conversation, and provides practical advice on how to deliver unpleasant information in ways that improve recipients’ welfare.

From the Summary:

Difficult conversations that require the delivery of negative information from communicators to targets involve perceived moral conflict between honesty and benevolence. We suggest that communicators exaggerate this conflict. By focusing on the short-term harm and unpleasantness associated with difficult conversations, communicators fail to realize that honesty and benevolence are actually compatible in many cases. Providing honest feedback can help a target to learn and grow, thereby improving the target’s overall welfare. Rather than attempting to resolve the honesty-benevolence dilemma via communication strategies that focus narrowly on the short-term conflict between honesty and emotional harm, we recommend that communicators instead invoke communication strategies that integrate and maximize both honesty and benevolence to ensure that difficult conversations lead to long-term welfare improvements for targets. Future research should explore the traits, mindsets, and contexts that might facilitate this approach. For example, creative people may be more adept at integrative solutions to the perceived honesty-dilemma conflict, and people who are less myopic and more cognizant of the future consequences of their choices may be better at recognizing the long-term benefits of honesty.

The info is here.

This research has relevance to psychotherapy.

Tuesday, March 3, 2020

The lesser of two evils: Explaining a bad choice by revealing the choice set

Andras Molnar & Shereen J. Chaudhry
PsyArXiv
Last edited 4 Feb 20

Abstract

Making the right choice does not always lead to a good outcome—sometimes there are only bad outcomes to choose from. Situations like this are likely to lead others to misunderstand the decision maker’s intentions. However, simply revealing the choice set could set the record straight. Are decision-makers intrinsically driven to fix this misjudgment? If so, why, and what is the effect on the audience? Previous studies could not examine this desire to be understood because the research designs used did not isolate the decision to reveal information from the original choice. In two experiments (N=448 pairs), we address this gap in the literature and show that people are willing to pay ex post to reveal their choice set to the person who was negatively affected by their decision (the recipient), even after a one-shot anonymous interaction with no reputational consequences, and in some cases even when doing so reveals their selfish intentions. We find that this revealing behavior is effective at improving recipients’ rating of their outcome when it signals generous intentions, but not when it signals selfish intentions. It follows that the choice to reveal is driven by concern for the thoughts and feelings of strangers, but only when revealing signals generous intentions; those who reveal a choice that appears selfish report doing so out of a desire to be and/or appear honest. Individual differences in the drive to reveal cannot be explained by selection effects or mistakes in predicting the observer’s reaction. Thus, we find that people are intrinsically (i.e., even in one-shot anonymous settings) driven to correct a misunderstanding of their intentions, but they may do so for a variety of reasons, not all of which are self-enhancing. And though some people leave a misunderstanding in place when it is self-enhancing to do so, almost no one is willing to create a misunderstanding (by hiding the other option), even when it could conceal selfish behavior.

The research is here.

Thursday, February 27, 2020

Liar, Liar, Liar

S. Vedantam, M. Penman, & T. Boyle
Hidden Brain - NPR.org
Originally posted 17 Feb 20

When we think about dishonesty, we mostly think about the big stuff.

We see big scandals, big lies, and we think to ourselves, I could never do that. We think we're fundamentally different from Bernie Madoff or Tiger Woods.

But behind big lies are a series of small deceptions. Dan Ariely, a professor of psychology and behavioral economics at Duke University, writes about this in his book The Honest Truth about Dishonesty.

"One of the frightening conclusions we have is that what separates honest people from not-honest people is not necessarily character, it's opportunity," he said.

These small lies are quite common. When we lie, it's not always a conscious or rational choice. We want to lie and we want to benefit from our lying, but we want to be able to look in the mirror and see ourselves as good, honest people. We might go a little too fast on the highway, or pocket extra change at a gas station, but we're still mostly honest ... right?

That's why Ariely describes honesty as something of a state of mind. He thinks the IRS should have people sign a pledge committing to be honest when they start working on their taxes, not when they're done. Setting the stage for honesty is more effective than asking someone after the fact whether or not they lied.

The info is here.

There is a 30-minute audio file worth listening to.

Monday, January 27, 2020

Nurses Continue to Rate Highest in Honesty, Ethics

RJ Reinhart
news.gallup.com
Originally posted 6 Jan 20

For the 18th year in a row, Americans rate the honesty and ethics of nurses highest among a list of professions that Gallup asks U.S. adults to assess annually. Currently, 85% of Americans say nurses' honesty and ethical standards are "very high" or "high," essentially unchanged from the 84% who said the same in 2018. Conversely, Americans hold car salespeople in the lowest esteem, with 9% saying individuals in this field have high levels of ethics and honesty, similar to the 8% who said the same in 2018.

Nurses are consistently rated higher in honesty and ethics than all other professions that Gallup asks about, by a wide margin. Medical professions in general rate highly in Americans' assessments of honesty and ethics, with at least six in 10 U.S. adults saying medical doctors, pharmacists and dentists have high levels of these virtues. The only nonmedical profession that Americans now hold in a similar level of esteem is engineers, with 66% saying individuals in this field have high levels of honesty and ethics.

Americans' high regard for healthcare professionals contrasts sharply with their assessments of stockbrokers, advertising professionals, insurance salespeople, senators, members of Congress and car salespeople -- all of which garner less than 20% of U.S. adults saying they have high levels of honesty and ethics.

The public's low levels of belief in the honesty and ethical standards of senators and members of Congress may be a contributing factor in poor job approval ratings for the legislature. No more than 30% of Americans have approved of Congress in the past 10 years.

The info is here.

Thursday, July 11, 2019

Civic honesty around the globe

Alain Cohn, Michel André Maréchal, David Tannenbaum, & Christian Lukas Zünd
Science, 20 Jun 2019
DOI: 10.1126/science.aau8712

Abstract

Civic honesty is essential to social capital and economic development, but is often in conflict with material self-interest. We examine the trade-off between honesty and self-interest using field experiments in 355 cities spanning 40 countries around the globe. We turned in over 17,000 lost wallets with varying amounts of money at public and private institutions, and measured whether recipients contacted the owner to return the wallets. In virtually all countries citizens were more likely to return wallets that contained more money. Both non-experts and professional economists were unable to predict this result. Additional data suggest our main findings can be explained by a combination of altruistic concerns and an aversion to viewing oneself as a thief, which increase with the material benefits of dishonesty.

Here is the conclusion:

Our findings also represent a unique data set for examining cross-country differences in civic honesty. Honesty is a key component of social capital, and here we provide an objective measure to supplement the large body of work that has traditionally examined social capital using subjective survey measures. Using average response rates across countries, we find substantial variation in rates of civic honesty, ranging from 14% to 76%. This variation largely persists even when controlling for a country’s gross domestic product, suggesting that other factors besides country wealth are also at play. In the supplementary materials, we provide an analysis suggesting that economically favorable geographic conditions, inclusive political institutions, national education, and cultural values that emphasize moral norms extending beyond one’s in-group are also positively associated with rates of civic honesty. Future research is needed to identify how these and other factors may contribute to societal differences in honest behavior.

The research is here.

Tuesday, April 23, 2019

4 Ways Lying Becomes the Norm at a Company

Ron Carucci
Harvard Business Review
Originally published February 15, 2019

Many of the corporate scandals in the past several years — think Volkswagen or Wells Fargo — have been cases of wide-scale dishonesty. It’s hard to fathom how lying and deceit permeated these organizations. Some researchers point to group decision-making processes or psychological traps that snare leaders into justification of unethical choices. Certainly those factors are at play, but they largely explain dishonest behavior at an individual level, so I wondered about systemic factors that might influence whether or not people in organizations distort or withhold the truth from one another.

This is what my team set out to understand through a 15-year longitudinal study. We analyzed 3,200 interviews that were conducted as part of 210 organizational assessments to see whether there were factors that predicted whether or not people inside a company will be honest. Our research yielded four factors — not individual character traits, but organizational issues — that played a role. The good news is that these factors are completely within a corporation’s control and improving them can make your company more honest, and help avert the reputation and financial disasters that dishonesty can lead to.

The stakes here are high. Accenture’s Competitive Agility Index (a 7,000-company, 20-industry analysis) for the first time tangibly quantified how a decline in stakeholder trust impacts a company’s financial performance. The analysis reveals more than half (54%) of companies on the index experienced a material drop in trust — from incidents such as product recalls, fraud, data breaches and c-suite missteps — which equates to a minimum of $180 billion in missed revenues. Worse, following a drop in trust, a company’s index score drops 2 points on average, negatively impacting revenue growth by 6% and EBITDA by 10% on average.

The info is here.

Tuesday, February 12, 2019

How to tell the difference between persuasion and manipulation

Robert Noggle
aeon.co
Originally published August 1, 2018

Here is an excerpt:

It appears, then, that whether an influence is manipulative depends on how it is being used. Iago’s actions are manipulative and wrong because they are intended to get Othello to think and feel the wrong things. Iago knows that Othello has no reason to be jealous, but he gets Othello to feel jealous anyway. This is the emotional analogue to the deception that Iago also practises when he arranges matters (eg, the dropped handkerchief) to trick Othello into forming beliefs that Iago knows are false. Manipulative gaslighting occurs when the manipulator tricks another into distrusting what the manipulator recognises to be sound judgment. By contrast, advising an angry friend to avoid making snap judgments before cooling off is not acting manipulatively, if you know that your friend’s judgment really is temporarily unsound. When a conman tries to get you to feel empathy for a non-existent Nigerian prince, he acts manipulatively because he knows that it would be a mistake to feel empathy for someone who does not exist. Yet a sincere appeal to empathy for real people suffering undeserved misery is moral persuasion rather than manipulation. When an abusive partner tries to make you feel guilty for suspecting him of the infidelity that he just committed, he is acting manipulatively because he is trying to induce misplaced guilt. But when a friend makes you feel an appropriate amount of guilt over having deserted him in his hour of need, this does not seem manipulative.

The info is here.

Wednesday, January 16, 2019

What Is the Right to Privacy?

Andrei Marmor
(2015) Philosophy & Public Affairs, 43, 1, pp 3-26

The right to privacy is a curious kind of right. Most people think that we have a general right to privacy. But when you look at the kind of issues that lawyers and philosophers label as concerns about privacy, you see widely differing views about the scope of the right and the kind of cases that fall under its purview. Consequently, it has become difficult to articulate the underlying interest that the right to privacy is there to protect—so much so that some philosophers have come to doubt that there is any underlying interest protected by it. According to Judith Thomson, for example, privacy is a cluster of derivative rights, some of them derived from rights to own or use your property, others from the right to your person or your right to decide what to do with your body, and so on. Thomson’s position starts from a sound observation, and I will begin by explaining why. The conclusion I will reach, however, is very different. I will argue that there is a general right to privacy grounded in people’s interest in having a reasonable measure of control over the ways in which they can present themselves (and what is theirs) to others. I will strive to show that this underlying interest justifies the right to privacy and explains its proper scope, though the scope of the right might be narrower, and fuzzier in its boundaries, than is commonly understood.

The info is here.

Tuesday, October 23, 2018

James Gunn's Firing Is What Happens When We Outsource Morality to Capitalism

Anhar Karim
Forbes.com
Originally posted September 16, 2018

Here is an excerpt:

A study last year from Cone Communications found that 87% of consumers said they’d purchase a company’s product if said company showed that they cared about issues consumers cared about. On the flip side of that, 75% of consumers said they would not buy from a company which showed they did not care. If business executives and CEOs are following along, as they surely are, the lesson is this: If a company wants to stay on top in the modern age, and if they want to maximize their profits, then they need to beat their competitors not only with superior products but also with demonstrated, superior moral behavior.

This, on its face, does not appear horrible. Indeed, this new development has led to a lot of undeniable good. It’s this idea that gave the #MeToo movement its bite and toppled industry giants such as Harvey Weinstein, Kevin Spacey and Les Moonves. It’s this strategy that’s led Warner Brothers to mandate an inclusion rider, Sony to diversify their comic titles, and Marvel to get their heroes to visit children in hospitals.

So how could any of this be negative?

Well, consider the other side of these attempts at corporate responsibility, the efforts that look good but help no one. What am I talking about? Consider that we recently had a major movie with a song celebrating difference and being true to yourself. That sounds good. However, the plot of the film is actually about exploiting minorities for profit. So it falls flat. Or consider that we had a woman cast in a Marvel franchise playing a role normally reserved for a man. Sounds progressive, right? Until we realize that that is also an example of a white actor trying her best to look Asian and thus limiting diversity. Also, consider that Sony decided to try and help fight back against bullying. Noble intent, but the way they went about it? They helped put up posters oddly suggesting that bullying could be stopped with sending positive emojis. Again, all of these sound sort of good on paper, but in practice, they help no one.

The info is here.