Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Communication.

Wednesday, February 14, 2024

Responding to Medical Errors—Implementing the Modern Ethical Paradigm

T. H. Gallagher &  A. Kachalia
The New England Journal of Medicine
January 13, 2024
DOI: 10.1056/NEJMp2309554

Here are some excerpts:

Traditionally, recommendations regarding responding to medical errors focused mostly on whether to disclose mistakes to patients. Over time, empirical research, ethical analyses, and stakeholder engagement began to inform expectations — which are now embodied in communication and resolution programs (CRPs) — for how health care professionals and organizations should respond not just to errors but any time patients have been harmed by medical care (adverse events). CRPs require several steps: quickly detecting adverse events, communicating openly and empathetically with patients and families about the event, apologizing and taking responsibility for errors, analyzing events and redesigning processes to prevent recurrences, supporting patients and clinicians, and proactively working with patients toward reconciliation. In this modern ethical paradigm, any time harm occurs, clinicians and health care organizations are accountable for minimizing suffering and promoting learning. However, implementing this ethical paradigm is challenging, especially when the harm was due to an error.

Historically, the individual physician was deemed the "captain of the ship," solely accountable for patient outcomes. Bioethical analyses emphasized the fiduciary nature of the doctor-patient relationship (i.e., doctors are in a position of greater knowledge and power) and noted that telling patients...about harmful errors supported patient autonomy and facilitated informed consent for future decisions. However, under U.S. tort law, physicians and organizations can be held accountable and financially liable for damages when they make negligent errors. As a result, ethical recommendations for openness were drowned out by fears of lawsuits and payouts, leading to a "deny and defend" response. Several factors initiated a paradigm shift. In the early 2000s, reports from the Institute of Medicine transformed the way the health care profession conceptualized patient safety. The imperative became creating cultures of safety that encouraged everyone to report errors to enable learning and foster more reliable systems. Transparency assumed greater importance, since you cannot fix problems you don't know about. The ethical imperative for openness was further supported when rising consumerism made it clear that patients expected responses to harm to include disclosure of what happened, an apology, reconciliation, and organizational learning.

(cut)

CRP Model for Responding to Harmful Medical Errors

Research has been critical to CRP expansion. Several studies have demonstrated that CRPs can enjoy physician support and operate without increasing liability risk. Nonetheless, research also shows that physicians remain concerned about their ability to communicate with patients and families after a harmful error and worry about liability risks including being sued, having their malpractice premiums raised, and having the event reported to the National Practitioner Data Bank (NPDB). Successful CRPs typically deploy a formal team, prioritize clinician and leadership buy-in, and engage liability insurers in their efforts. The table details the steps associated with the CRP model, the ethical rationale for each step, barriers to implementation, and strategies for overcoming them.

The growth of CRPs also reflects collaboration among diverse stakeholder groups, including patient advocates, health care organizations, plaintiff and defense attorneys, liability insurers, state medical associations, and legislators. Sustained stakeholder engagement that respects the diverse perspectives of each group has been vital, given the often opposing views these groups have espoused.
As CRPs proliferate, it will be important to address a few key challenges and open questions in implementing this ethical paradigm.


The article provides a number of recommendations for how healthcare providers can implement these principles. These include:
  • Developing open and honest communication with patients.
  • Providing timely and accurate information about the error.
  • Offering apologies and expressing empathy for the harm that has been caused.
  • Working with patients to develop a plan to address the consequences of the error.
  • Conducting a thorough investigation of the error to identify the root causes and prevent future errors.
  • Sharing the results of the investigation with patients and the public.

Wednesday, February 7, 2024

Listening to bridge societal divides

Santoro, E., & Markus, H. R. (2023).
Current Opinion in Psychology, 54, 101696.

Abstract

The U.S. is plagued by a variety of societal divides across political orientation, race, and gender, among others. Listening has the potential to be a key element in spanning these divides. Moreover, the benefits of listening for mitigating social division have become a culturally popular idea and practice. Recent evidence suggests that listening can bridge divides in at least two ways: by improving outgroup sentiment and by granting outgroup members greater status and respect. When reviewing this literature, we pay particular attention to mechanisms and to boundary conditions, as well as to the possibility that listening can backfire. We also review a variety of current interventions designed to encourage and improve listening at all levels of the culture cycle. The combination of recent evidence and the growing popular belief in the significance of listening heralds a bright future for research on the many ways that listening can diffuse stereotypes and improve attitudes underlying intergroup division.

The article is paywalled, which does not help in spreading the word. This information can be very helpful in couples and family therapy. Here are my thoughts:

The idea that listening can help bridge societal divides is a powerful one. When we truly listen to someone from a different background, we open ourselves up to understanding their perspective and experiences. This can help to break down stereotypes and foster empathy.

Benefits of Listening:
  • Reduces prejudice: Studies have shown that listening to people from different groups can help to reduce prejudice. When we hear the stories of others, we are more likely to see them as individuals, rather than as members of a stereotyped group.
  • Builds trust: Listening can help to build trust between people from different groups. When we show that we are willing to listen to each other, we demonstrate that we are open to understanding and respecting each other's views.
  • Finds common ground: Even when people disagree, listening can help them to find common ground. By focusing on areas of agreement, rather than on differences, we can build a foundation for cooperation and collaboration.
Challenges of Listening:

It is important to acknowledge that listening is not always easy. There are a number of challenges that can make it difficult to truly hear and understand someone from a different background. These challenges include:
  • Bias: We all have biases, and these biases can influence the way we listen to others. It is important to be aware of our own biases and to try to set them aside when we are listening to someone else.
  • Distraction: In today's world, there are many distractions that can make it difficult to focus on what someone else is saying. It is important to create a quiet and distraction-free environment when we are trying to have a meaningful conversation with someone.
  • Discomfort: Talking about difficult topics can be uncomfortable. However, it is important to be willing to listen to these conversations, even if they make us feel uncomfortable.
Tips for Effective Listening:
  • Pay attention: Make eye contact and avoid interrupting the speaker.
  • Be open-minded: Try to see things from the speaker's perspective, even if you disagree with them.
  • Ask questions: Ask clarifying questions to make sure you understand what the speaker is saying.
  • Summarize: Briefly summarize what you have heard to show that you were paying attention.

By practicing these tips, we can become more effective listeners and, in turn, help to bridge the divides that separate us.

Monday, February 27, 2023

Domestic violence hotline calls will soon be invisible on your family phone plan

Ashley Belanger
Ars Technica
Originally published 17 FEB 23

Today, the Federal Communications Commission proposed rules to implement the Safe Connections Act, which President Joe Biden signed into law last December. Advocates consider the law a landmark move to stop tech abuse. Under the law, mobile service providers are required to help survivors of domestic abuse and sexual violence access resources and maintain critical lines of communication with friends, family, and support organizations.

Under the proposed rules, mobile service providers are required to separate a survivor’s line from a shared or family plan within two business days. Service providers must also “omit records of calls or text messages to certain hotlines from consumer-facing call and text message logs,” so that abusers cannot see when survivors are seeking help. Additionally, the FCC plans to launch a “Lifeline” program, providing emergency communications support for up to six months for survivors who can’t afford to pay for mobile services.

“These proposed rules would help survivors obtain separate service lines from shared accounts that include their abusers, protect the privacy of calls made by survivors to domestic abuse hotlines, and provide support for survivors who suffer from financial hardship through our affordability programs,” the FCC’s announcement said.

The FCC has already consulted with tech associations and domestic violence support organizations in forming the proposed rules, but now the public has a chance to comment. An FCC spokesperson confirmed to Ars that comments are open now. Crystal Justice, the National Domestic Violence Hotline’s chief external affairs officer, told Ars that it’s critical for survivors to submit comments to help inform FCC rules with their experiences of tech abuse.

To submit comments, visit this link and fill in “22-238” as the proceeding number. That will auto-populate a field that says “Supporting Survivors of Domestic and Sexual Violence.”

The FCC’s spokesperson told Ars that the initial public comment period will be open for 30 days after the rules are published in the Federal Register, and then a reply comment period will be open for 30 days after the initial comment period ends.

Sunday, January 15, 2023

How Hedges Impact Persuasion

Oba, D., & Berger, J. A. (2022, July 23).

Abstract

Communicators often hedge. Salespeople say that a product is probably the best, recommendation engines suggest movies they think you’ll like, and consumers say restaurants might have good service. But how does hedging impact persuasion? We suggest that different types of hedges may have different effects. Six studies support our theorizing, demonstrating that (1) the probabilistic likelihood hedges suggest and (2) whether they take a personal (vs. general) perspective both play an important role in driving persuasion. Further, the studies demonstrate that both effects are driven by a common mechanism: perceived confidence. Using hedges associated with higher likelihood, or that involve personal perspective, increases persuasion because they suggest communicators are more confident about what they are saying. This work contributes to the burgeoning literature on language in marketing, showcases how subtle linguistic features impact perceived confidence, and has clear implications for anyone trying to be more persuasive.

General Discussion

Communicating uncertainty is an inescapable part of marketplace interactions. Customer service representatives suggest solutions that “they think” will work, marketers inform buyers about risks a product “may” have, and consumers recommend restaurants that have the best food “in their opinion”. Such communications are critical in determining which solutions are implemented, which products are bought, and which restaurants are visited.

But while it is clear that hedging is both frequent and important, less is known about its impact. Do hedges always hurt persuasion? If not, which hedges are more or less persuasive, and why?

Six studies explore these questions. First, they demonstrate that different types of hedges have different effects. Consistent with our theorizing, hedges associated with higher likelihood of occurrence (Studies 1, 2A, 3, and 4A) or that take a personal (rather than general) perspective (Studies 1, 2B, 3, and 4B) are more persuasive. Further, hedges don’t always reduce persuasion (Studies 2A and 2B). Testing these effects using dozens of different hedges, across multiple domains, and using multiple measures of persuasion (including consequential choice) speaks to their robustness and generalizability.

Second, the studies demonstrate a common process that underlies these effects. When communicators use hedges associated with higher likelihood, or a personal (rather than general) perspective, it makes them seem more confident. This, in turn, increases persuasion (Studies 1, 3, 4A, and 4B). Demonstrating these effects through mediation (Studies 1, 3, 4A, and 4B) and moderation (Studies 4A and 4B) underscores robustness. Further, while other factors may contribute, the studies conducted here indicate full mediation by perceived confidence, highlighting its importance.
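For readers unfamiliar with the product-of-coefficients logic behind a mediation claim like this, here is a minimal sketch in Python. The data, the column names (hedge, confidence, persuasion), and the effect sizes are simulated for illustration only; they are not Oba and Berger's materials or analysis code.

```python
# Illustrative sketch (simulated data, not the authors' analysis):
# product-of-coefficients mediation test of hedge type -> perceived confidence -> persuasion.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
# Hypothetical coding: 1 = high-likelihood / personal hedge, 0 = low-likelihood / general hedge
hedge = rng.integers(0, 2, n)
confidence = 0.6 * hedge + rng.normal(size=n)                      # a-path (simulated)
persuasion = 0.5 * confidence + 0.1 * hedge + rng.normal(size=n)   # b-path and direct effect (simulated)
df = pd.DataFrame({"hedge": hedge, "confidence": confidence, "persuasion": persuasion})

a = smf.ols("confidence ~ hedge", df).fit().params["hedge"]        # hedge -> confidence
model_b = smf.ols("persuasion ~ confidence + hedge", df).fit()
b = model_b.params["confidence"]                                   # confidence -> persuasion
c_prime = model_b.params["hedge"]                                  # direct effect of hedge

print(f"indirect effect (a*b): {a * b:.3f}, direct effect (c'): {c_prime:.3f}")
```

In published work the usual next step would be a bootstrapped confidence interval around the indirect effect; the point here is only to show what "mediation by perceived confidence" means operationally.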


Psychologists and other mental health professionals may want to consider this research as part of psychotherapy.

Tuesday, August 9, 2022

You can handle the truth: Mispredicting the consequences of honest communication

Levine, E. E., & Cohen, T. R. (2018).
Journal of Experimental Psychology: General, 
147(9), 1400–1429. 

Abstract

People highly value the moral principle of honesty, and yet, they often avoid being honest with others. One reason people may avoid being completely honest is that honesty frequently conflicts with kindness: candidly sharing one’s opinions and feelings can hurt others and create social tension. In the present research, we explore the actual and predicted consequences of communicating honestly during difficult conversations. We compare honest communication to kind communication as well as a neutral control condition by randomly assigning individuals to be honest, kind, or conscious of their communication in every conversation with every person in their life for three days. We find that people significantly mispredict the consequences of communicating honestly: the experience of being honest is far more pleasurable, leads to greater levels of social connection, and does less relational harm than individuals expect. We establish these effects across two field experiments and two prediction experiments and we document the robustness of our results in a subsequent laboratory experiment. We explore the underlying mechanisms by qualitatively coding participants’ reflections during and following our experiments. This research contributes to our understanding of affective forecasting processes and uncovers fundamental insights on how communication and moral values shape well-being.

From the Discussion section

Our findings make several important contributions to our understanding of morality, affective forecasting, and human communication. First, we provide insight into why people avoid being honest with others. Our results suggest that individuals’ aversion to honesty is driven by a forecasting failure: Individuals expect honesty to be less pleasant and less socially connecting than it is. Furthermore, our studies suggest this is driven by individuals’ misguided fear of social rejection. Whereas prior work on mispredictions of social interactions has primarily examined how individuals misunderstand others or their preferences for interaction, the present research examines how individuals misunderstand others’ reactions to honest disclosure of thoughts and feelings, and how this shapes social communication.

Second, this research documents the broader consequences of being honest. Individuals’ predictions that honest communication would be less enjoyable and socially connecting than kind communication or one’s baseline communication were generally wrong. In the field experiment (Study 1a), participants in the honesty condition either felt similar or higher levels of social connection relative to participants in the kindness and control conditions. Participants in the honesty condition also derived greater long-term hedonic well-being and greater relational improvements relative to participants in the control condition. Furthermore, participants in Study 2 reported increased meaning in their life one week after engaging in their brief, but intense, honest conversation. Scholars have long claimed that morality promotes well-being, but to our knowledge, this is the first research to document how enacting specific moral principles promotes different types of well-being.

Taken together, these findings suggest that individuals’ avoidance of honesty may be a mistake. By avoiding honesty, individuals miss out on opportunities that they appreciate in the long-run, and that they would want to repeat. Individuals’ choices about how to behave – in this case, whether or not to communicate honestly – seem to be driven primarily by expectations of enjoyment, but appreciation for these behaviors is driven by the experience of meaning. We encourage future research to further examine how affective forecasting failures may prevent individuals from finding meaning in their lives.

See the link above to the research.

Friday, June 24, 2022

Leaders with Multicultural Experiences Communicate and Lead More Effectively, Especially in Multinational Teams

J. G. Lu, R. I. Swaab, A. D. Galinsky
Organization Science
Published Online: 22 Jul 2021

Abstract

In an era of globalization, it is commonly assumed that multicultural experiences foster leadership effectiveness. However, little research has systematically tested this assumption. We develop a theoretical perspective that articulates how and when multicultural experiences increase leadership effectiveness. We hypothesize that broad multicultural experiences increase individuals’ leadership effectiveness by developing their communication competence. Because communication competence is particularly important for leading teams that are more multinational, we further hypothesize that individuals with broader multicultural experiences are particularly effective when leading more versus less multinational teams. Four studies test our theory using mixed methods (field survey, archival panel, field experiments) and diverse populations (corporate managers, soccer managers, hackathon leaders) in different countries (Australia, Britain, China, America). In Study 1, corporate managers with broader multicultural experiences were rated as more effective leaders, an effect mediated by communication competence. Analyzing a 25-year archival panel of English Premier League soccer managers, Study 2 replicates the positive effect of broad multicultural experiences using a team performance measure of leadership effectiveness. Importantly, this effect was moderated by team national diversity: soccer managers with broader multicultural experiences were particularly effective when leading teams with greater national diversity. Study 3 (digital health hackathon) and Study 4 (COVID-19 policy hackathon) replicate these effects in two field experiments, in which individuals with varying levels of multicultural experiences were randomly assigned to lead hackathon teams that naturally varied in national diversity. Overall, our research suggests that broad multicultural experiences help leaders communicate more competently and lead more effectively, especially when leading multinational teams.

From the Discussion

Practical Implications

Because of the rise of globalization, individuals and organizations increasingly value and invest in multicultural experiences. However, multicultural experiences are expensive. The present research lends support to the common belief that multicultural experiences foster leadership effectiveness (Karabell 2016, Pelos 2017). Notably, our studies consistently found that the breadth (but not the depth) of multicultural experiences predicted leadership effectiveness via communication competence. This finding suggests that organizations should ensure that expatriates are exposed to a broad set of experiences. For example, when structuring international assignments, organizations should consider exposing their employees to a range of foreign postings (e.g., global rotation programs) rather than one lengthy foreign posting (Suutari and Mäkelä 2007). Similarly, individuals may consider pursuing multinational educational programs (e.g., global MBA) that allow them to engage with different cultures.

Just as individuals’ multicultural experiences are increasingly prevalent, so are multinational teams. The present research examined three multinational team contexts with high ecological validity and real-world consequences. Across these contexts, we provide evidence that multinational teams perform better when led by leaders with broad multicultural experiences.

Friday, April 15, 2022

Strategic identity signaling in heterogeneous networks

T. van der Does, M. Galesic, et al.
PNAS, 2022.
119 (10) e2117898119

Abstract

Individuals often signal identity information to facilitate assortment with partners who are likely to share norms, values, and goals. However, individuals may also be incentivized to encrypt their identity signals to avoid detection by dissimilar receivers, particularly when such detection is costly. Using mathematical modeling, this idea has previously been formalized into a theory of covert signaling. In this paper, we provide an empirical test of the theory of covert signaling in the context of political identity signaling surrounding the 2020 US presidential elections. To identify likely covert and overt signals on Twitter, we use methods relying on differences in detection between ingroup and outgroup receivers. We strengthen our experimental predictions with additional mathematical modeling and examine the usage of selected covert and overt tweets in a behavioral experiment. We find that participants strategically adjust their signaling behavior in response to the political constitution of their audiences. These results support our predictions and point to opportunities for further theoretical development. Our findings have implications for our understanding of political communication, social identity, pragmatics, hate speech, and the maintenance of cooperation in diverse populations.

Significance

Much of online conversation today consists of signaling one’s political identity. Although many signals are obvious to everyone, others are covert, recognizable to one’s ingroup while obscured from the outgroup. This type of covert identity signaling is critical for collaborations in a diverse society, but measuring covert signals has been difficult, slowing down theoretical development. We develop a method to detect covert and overt signals in tweets posted before the 2020 US presidential election and use a behavioral experiment to test predictions of a mathematical theory of covert signaling. Our results show that covert political signaling is more common when the perceived audience is politically diverse and open doors to a better understanding of communication in politically polarized societies.
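The detection-difference idea in the abstract can be illustrated with a small sketch: a message is "covert" to the extent that ingroup raters recognize its political signal while outgroup raters miss it. The data structure, field names, and numbers below are hypothetical assumptions for illustration, not the authors' Twitter pipeline or model.

```python
# Illustrative sketch (hypothetical data format): score messages by the gap between
# ingroup and outgroup detection rates, the intuition behind "covert signaling".
from dataclasses import dataclass

@dataclass
class MessageRatings:
    text: str
    ingroup_detected: int    # raters from the sender's party who saw a political signal
    ingroup_total: int
    outgroup_detected: int   # raters from the other party who saw a political signal
    outgroup_total: int

def covertness(m: MessageRatings) -> float:
    """Higher = recognized by the ingroup but largely missed by the outgroup."""
    ingroup_rate = m.ingroup_detected / m.ingroup_total
    outgroup_rate = m.outgroup_detected / m.outgroup_total
    return ingroup_rate - outgroup_rate

# Invented example ratings
msgs = [
    MessageRatings("dog-whistle phrasing", 45, 50, 12, 50),
    MessageRatings("explicit slogan", 49, 50, 48, 50),
]
for m in sorted(msgs, key=covertness, reverse=True):
    print(f"{m.text}: covertness={covertness(m):+.2f}")
```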

From the Discussion

The theory predicts that individuals should use more covert signaling in more heterogeneous groups or when they are in the minority. We found support for this prediction in the ways people shared political speech in a behavioral experiment. We observed the highest levels of covert signaling when audiences consisted almost entirely of cross-partisans, supporting the notion that covert signaling is a strategy for avoiding detection by hostile outgroup members. Of note, we selected tweets for our study at a time of heightened partisan divisions: the four weeks preceding the 2020 US presidential election. Consequently, these tweets mostly discussed the opposing political party. This focus was reflected in our behavioral experiment, in which we did not observe an effect of audience composition when all members were (more or less extreme) copartisans. In that societal context, participants might have perceived the cost of dislikes to be minimal and have likely focused on partisan disputes in their real-life conversations happening around that time. Future work testing the theory of covert signaling should also examine signaling strategies in copartisan conversations during times of salient intragroup political divisions.


Editor's Note: I wonder whether this research generalizes to other covert forms of communication during psychotherapy.

Friday, December 17, 2021

The Conversational Circumplex: Identifying, Prioritizing, and Pursuing Informational and Relational Motives in Conversation

M. Yeomans, M. Schweitzer, & A. Wood Brooks
Current Opinion in Psychology
Available online 11 October 2021

Abstract

The meaning of success in conversation depends on people’s goals. Often, individuals pursue multiple goals simultaneously, such as establishing shared understanding, making a favorable impression, and persuading a conversation partner. In this article, we introduce a novel theoretical framework, the Conversational Circumplex, to classify conversational motives along two key dimensions: 1) Informational: the extent to which a speaker’s motive focuses on giving and/or receiving accurate information and 2) Relational: the extent to which a speaker’s motive focuses on building the relationship. We use the conversational circumplex to underscore the multiplicity of conversational goals that people hold, and highlight the potential for individuals to have conflicting conversational goals (both intrapersonally and interpersonally) that make successful conversation a difficult challenge.

Conclusion

In this article, we introduce a novel framework, the Conversational Circumplex, to build our understanding of conversational motives. By introducing this framework, we provide a generative foundation for future scholarship and a useful tool for conversationalists to identify their own motives, discern others’ motives, and advance their goals more effectively in conversation. The meaning of success in a conversation requires that we start by understanding what conversationalists are hoping to achieve.

Note: This has implications for psychotherapy and other helping relationships.

Tuesday, November 30, 2021

Community standards of deception: Deception is perceived to be ethical when it prevents unnecessary harm

Levine, E. E. (2021). 
Journal of Experimental Psychology: 
General. Advance online publication. 
https://doi.org/10.1037/xge0001081

Abstract

We frequently claim that lying is wrong, despite modeling that it is often right. The present research sheds light on this tension by unearthing systematic cases in which people believe lying is ethical in everyday communication and by proposing and testing a theory to explain these cases. Using both inductive and experimental approaches, the present research finds that deception is perceived to be ethical and individuals want to be deceived when deception is perceived to prevent unnecessary harm. This research identifies eight community standards of deception: rules of deception that most people abide by and recognize once articulated, but have never previously been codified. These standards clarify systematic circumstances in which deception is perceived to prevent unnecessary harm, and therefore, circumstances in which deception is perceived to be ethical. This work also documents how perceptions of unnecessary harm influence the use and judgment of deception in everyday life, above and beyond other moral concerns. These findings provide insight into when and why people value honesty and pave the way for future research on when and why people embrace deception.

From the Discussion

First, this work illuminates how people fundamentally think about deception. Specifically, this work identifies systematic circumstances in which deception is seen as more ethical than honesty, and it provides an organizing framework for understanding these circumstances. A large body of research identifies features of lies that make them seem more or less justifiable and therefore, that lead people to tell greater or fewer lies (e.g., Effron, 2018; Rogers, Zeckhauser, Gino, Norton, & Schweitzer, 2017; Shalvi, Dana, Handgraaf, & De Dreu, 2011). However, little research addresses whether people, upon introspection, ever actually believe it is right to tell lies; that is, whether lying is ever a morally superior strategy to truth-telling. The present research finds that people believe lying is the right thing to do when it prevents unnecessary harm. Notably, this finding reveals that lay people seem to have a relatively pragmatic view of deception and honesty. Rather than believing deception is a categorical vice – for example, because it damages social trust (Bok 1978; Kant, 1949) or undermines autonomy (Bacon, 1872; Harris, 2011; Kant, 1959/1785) – people seem to conceptualize deception as a tactic that can and should be used to regulate another vice: harm.

Although this view of deception runs counter to prevailing normative claims and much of the existing scholarship in psychology and economics, which paints deception as generally unethical, it is important to note that this idea – that deception is and should be used pragmatically - is not novel. In fact, many of the rules of deception identified in the present research are alluded to in other philosophical, religious, and practical discussions of deception (see Table 2 for a review). Until now, however, these ideas have been siloed in disparate literatures, and behavioral scientists have lacked a parsimonious framework for understanding why individuals endorse deception in various circumstances. The present research identifies a common psychology that explains a number of seemingly unrelated “exceptions” to the norm of honesty, thereby unifying findings and arguments across psychology, religion, and philosophy under a common theoretical framework.

Friday, October 29, 2021

Harms of AI

Daron Acemoglu
NBER Working Paper No. 29247
September 2021

Abstract

This essay discusses several potential economic, political and social costs of the current path of AI technologies. I argue that if AI continues to be deployed along its current trajectory and remains unregulated, it may produce various social, economic and political harms. These include: damaging competition, consumer privacy and consumer choice; excessively automating work, fueling inequality, inefficiently pushing down wages, and failing to improve worker productivity; and damaging political discourse, democracy's most fundamental lifeblood. Although there is no conclusive evidence suggesting that these costs are imminent or substantial, it may be useful to understand them before they are fully realized and become harder or even impossible to reverse, precisely because of AI's promising and wide-reaching potential. I also suggest that these costs are not inherent to the nature of AI technologies, but are related to how they are being used and developed at the moment - to empower corporations and governments against workers and citizens. As a result, efforts to limit and reverse these costs may need to rely on regulation and policies to redirect AI research. Attempts to contain them just by promoting competition may be insufficient.

Conclusion

In this essay, I explored several potential economic, political and social costs of the current path of AI technologies. I suggested that if AI continues to be deployed along its current trajectory and remains unregulated, then it can harm competition, consumer privacy and consumer choice, it may excessively automate work, fuel inequality, inefficiently push down wages, and fail to improve productivity. It may also make political discourse increasingly distorted, cutting one of the lifelines of democracy. I also mentioned several other potential social costs from the current path of AI research.

I should emphasize again that all of these potential harms are theoretical. Although there is much evidence indicating that not all is well with the deployment of AI technologies and the problems of increasing market power, disappearance of work, inequality, low wages, and meaningful challenges to democratic discourse and practice are all real, we do not have sufficient evidence to be sure that AI has been a serious contributor to these troubling trends. Nevertheless, precisely because AI is a promising technological platform, aiming to transform every sector of the economy and every aspect of our social lives, it is imperative for us to study what its downsides are, especially on its current trajectory. It is in this spirit that I discussed the potential costs of AI in this paper.

Friday, August 13, 2021

Moral dilemmas and trust in leaders during a global health crisis

Everett, J.A.C., Colombatto, C., Awad, E. et al. 
Nat Hum Behav (2021). 

Abstract

Trust in leaders is central to citizen compliance with public policies. One potential determinant of trust is how leaders resolve conflicts between utilitarian and non-utilitarian ethical principles in moral dilemmas. Past research suggests that utilitarian responses to dilemmas can both erode and enhance trust in leaders: sacrificing some people to save many others (‘instrumental harm’) reduces trust, while maximizing the welfare of everyone equally (‘impartial beneficence’) may increase trust. In a multi-site experiment spanning 22 countries on six continents, participants (N = 23,929) completed self-report (N = 17,591) and behavioural (N = 12,638) measures of trust in leaders who endorsed utilitarian or non-utilitarian principles in dilemmas concerning the COVID-19 pandemic. Across both the self-report and behavioural measures, endorsement of instrumental harm decreased trust, while endorsement of impartial beneficence increased trust. These results show how support for different ethical principles can impact trust in leaders, and inform effective public communication during times of global crisis.

Discussion

The COVID-19 pandemic has raised a number of moral dilemmas that engender conflicts between utilitarian and non-utilitarian ethical principles. Building on past work on utilitarianism and trust, we tested the hypothesis that endorsement of utilitarian solutions to pandemic dilemmas would impact trust in leaders. Specifically, in line with suggestions from previous work and case studies of public communications during the early stages of the pandemic, we predicted that endorsing instrumental harm would decrease trust in leaders, while endorsing impartial beneficence would increase trust.

Sunday, June 6, 2021

Shared Reality: From Sharing-Is-Believing to Merging Minds

Higgins, E. T., Rossignac-Milon, M., & 
Echterhoff, G. (2021). 
Current Directions in Psychological 
Science, 30(2), 103–110. 
https://doi.org/10.1177/0963721421992027 

Abstract

Humans are fundamentally motivated to create a sense of shared reality—the perceived commonality of inner states (feelings, beliefs, and concerns about the world) with other people. This shared reality establishes a sense of both social connection and understanding the world. Research on shared reality has burgeoned in recent decades. We first review evidence for a basic building block of shared-reality creation: sharing-is-believing, whereby communicators tune their descriptions to align with their communication partner’s attitude about something, which in turn shapes their recall. Next, we describe recent developments moving beyond this basic building block to explore generalized shared reality about the world at large, which promotes interpersonal closeness and epistemic certainty. Together, this body of work exemplifies the synergy between relational and epistemic motives. Finally, we discuss the potential for another form of shared reality—shared relevance—to bridge disparate realities.

From Concluding Remarks

The field of shared reality has made significant progress in advancing understanding of how humans share inner states as a way to connect with each other and make sense of the world. These advancements shed new light on current issues. For instance, exaggerated perceptions of consensus generated by filter bubbles and echo chambers may inflate the experience of shared reality on social media, especially given the intensifying effects of collective attention (Shteynberg et al., 2020) and transmission through social networks (Kashima et al., 2018). By shaping attitudes and ideological beliefs (see Jost et al., 2018; Stern & Ondish, 2018), shared reality can perpetuate insular views and exacerbate ideological divisions. But there is a different kind of shared reality that could be beneficial in this context: shared perceptions of what is worthy of attention. Wanting to establish shared relevance is so central to human motivation that even infants seek to establish it with their caregivers by pointing out objects deserving of co-attention (Higgins, 2016).

Saturday, January 30, 2021

Scientific communication in a post-truth society

S. Iyengar & D. S. Massey
PNAS Apr 2019, 116 (16) 7656-7661

Abstract

Within the scientific community, much attention has focused on improving communications between scientists, policy makers, and the public. To date, efforts have centered on improving the content, accessibility, and delivery of scientific communications. Here we argue that in the current political and media environment faulty communication is no longer the core of the problem. Distrust in the scientific enterprise and misperceptions of scientific knowledge increasingly stem less from problems of communication and more from the widespread dissemination of misleading and biased information. We describe the profound structural shifts in the media environment that have occurred in recent decades and their connection to public policy decisions and technological changes. We explain how these shifts have enabled unscrupulous actors with ulterior motives increasingly to circulate fake news, misinformation, and disinformation with the help of trolls, bots, and respondent-driven algorithms. We document the high degree of partisan animosity, implicit ideological bias, political polarization, and politically motivated reasoning that now prevail in the public sphere and offer an actual example of how clearly stated scientific conclusions can be systematically perverted in the media through an internet-based campaign of disinformation and misinformation. We suggest that, in addition to attending to the clarity of their communications, scientists must also develop online strategies to counteract campaigns of misinformation and disinformation that will inevitably follow the release of findings threatening to partisans on either end of the political spectrum.

(cut)

At this point, probably the best that can be done is for scientists and their scientific associations to anticipate campaigns of misinformation and disinformation and to proactively develop online strategies and internet platforms to counteract them when they occur. For example, the National Academies of Science, Engineering, and Medicine could form a consortium of professional scientific organizations to fund the creation of a media and internet operation that monitors networks, channels, and web platforms known to spread false and misleading scientific information so as to be able to respond quickly with a countervailing campaign of rebuttal based on accurate information through Facebook, Twitter, and other forms of social media.

Saturday, August 8, 2020

How behavioural sciences can promote truth, autonomy and democratic discourse online

Lorenz-Spreen, P., Lewandowsky,
S., Sunstein, C.R. et al.
Nat Hum Behav (2020).
https://doi.org/10.1038/s41562-020-0889-7

Abstract

Public opinion is shaped in significant part by online content, spread via social media and curated algorithmically. The current online ecosystem has been designed predominantly to capture user attention rather than to promote deliberate cognition and autonomous choice; information overload, finely tuned personalization and distorted social cues, in turn, pave the way for manipulation and the spread of false information. How can transparency and autonomy be promoted instead, thus fostering the positive potential of the web? Effective web governance informed by behavioural research is critically needed to empower individuals online. We identify technologically available yet largely untapped cues that can be harnessed to indicate the epistemic quality of online content, the factors underlying algorithmic decisions and the degree of consensus in online debates. We then map out two classes of behavioural interventions—nudging and boosting—that enlist these cues to redesign online environments for informed and autonomous choice.

Here is an excerpt:

Another competence that could be boosted to help users deal more expertly with information they encounter online is the ability to make inferences about the reliability of information based on the social context from which it originates. The structure and details of the entire cascade of individuals who have previously shared an article on social media have been shown to serve as proxies for epistemic quality. More specifically, the sharing cascade contains metrics such as the depth and breadth of dissemination by others, with deep and narrow cascades indicating extreme or niche topics and breadth indicating widely discussed issues. A boosting intervention could provide this information (Fig. 3a) to display the full history of a post, including the original source, the friends and public users who disseminated it, and the timing of the process (showing, for example, if the information is old news that has been repeatedly and artificially amplified). Cascade statistics involve concepts that may take some practice to read and interpret, and one may need to experience a number of cascades to learn to recognize informative patterns.
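To make the depth and breadth metrics concrete, here is a minimal sketch that walks a hypothetical sharing cascade represented as a parent-to-children mapping. The tree format, node names, and numbers are assumptions for illustration; the paper does not prescribe this implementation.

```python
# Illustrative sketch (assumed tree format): compute depth and breadth of a sharing
# cascade, the two proxies for epistemic quality described in the excerpt above.
from collections import defaultdict, deque

# Hypothetical cascade: who reshared from whom; "origin" is the original poster.
cascade = {
    "origin": ["a", "b", "c"],
    "a": ["d", "e"],
    "d": ["f"],
}

def depth_and_breadth(tree, root="origin"):
    """Depth = longest reshare chain; breadth = widest single generation."""
    level_counts = defaultdict(int)
    queue = deque([(root, 0)])
    while queue:
        node, level = queue.popleft()
        level_counts[level] += 1
        for child in tree.get(node, []):
            queue.append((child, level + 1))
    return max(level_counts), max(level_counts.values())

depth, breadth = depth_and_breadth(cascade)
print(f"depth={depth}, breadth={breadth}")  # deep/narrow vs. shallow/wide patterns
```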

Wednesday, July 22, 2020

Inference from explanation.

Kirfel, L., Icard, T., & Gerstenberg, T.
(2020, May 22).
https://doi.org/10.31234/osf.io/x5mqc

Abstract

What do we learn from a causal explanation? Upon being told that "The fire occurred because a lit match was dropped", we learn that both of these events occurred, and that there is a causal relationship between them. However, causal explanations of the kind "E because C" typically disclose much more than what is explicitly stated. Here, we offer a communication-theoretic account of causal explanations and show specifically that explanations can provide information about the extent to which a cited cause is normal or abnormal, and about the causal structure of the situation. In Experiment 1, we demonstrate that people infer the normality of a cause from an explanation when they know the underlying causal structure. In Experiment 2, we show that people infer the causal structure from an explanation if they know the normality of the cited cause. We find these patterns both for scenarios that manipulate the statistical and prescriptive normality of events. Finally, we consider how the communicative function of explanations, as highlighted in this series of experiments, may help to elucidate the distinctive roles that normality and causal structure play in causal explanation.

Conclusion

In this paper, we investigate the communicative dimensions of explanation, revealing some of the rich and subtle inferences people draw from them. We find that people are able to infer additional information from a causal explanation beyond what was explicitly communicated, such as causal structure and normality of the causes. Our studies show that people make these inferences in part by appeal to what they themselves would judge reasonable to say across different possible scenarios. The overall pattern of judgments and inferences brings us closer to a full understanding of how causal explanations function in human discourse and behavior, while also raising new questions concerning the prominent role of norms in causal judgment and the function of causal explanation more broadly.

Editor's Note: This research has significant implications for psychotherapy.


Tuesday, July 14, 2020

The Pandemic Experts Are Not Okay

Ed Yong
The Atlantic
Originally posted 7 July 20

Here is an excerpt:

The field of public health demands a particular way of thinking. Unlike medicine, which is about saving individual patients, public health is about protecting the well-being of entire communities. Its problems, from malnutrition to addiction to epidemics, are broader in scope. Its successes come incrementally, slowly, and through the sustained efforts of large groups of people. As Natalie Dean, a biostatistician at the University of Florida, told me, “The pandemic is a huge problem, but I’m not afraid of huge problems.”

The more successful public health is, however, the more people take it for granted. Funding has dwindled since the 2008 recession. Many jobs have disappeared. Now that the entire country needs public-health advice, there aren’t enough people qualified to offer it. The number of epidemiologists who specialize in pandemic-level infectious threats is small enough that “I think I know them all,” says Caitlin Rivers, who studies outbreaks at the Johns Hopkins Center for Health Security.

The people doing this work have had to recalibrate their lives. From March to May, Colin Carlson, a research professor at Georgetown University who specializes in infectious diseases, spent most of his time traversing the short gap between his bed and his desk. He worked relentlessly and knocked back coffee, even though it exacerbates his severe anxiety: The cost was worth it, he felt, when the United States still seemed to have a chance of controlling COVID-19.

The info is here.

Monday, July 13, 2020

Our Minds Aren’t Equipped for This Kind of Reopening

Tess Wilkinson-Ryan
The Atlantic
Originally published 6 July 20

Here is the conclusion:

At the least, government agencies must promulgate clear, explicit norms and rules to facilitate cooperative choices. Most people congregating in tight spaces are telling themselves a story about why what they are doing is okay. Such stories flourish under confusing or ambivalent norms. People are not irrevocably chaotic decision makers; the level of clarity in human thinking depends on how hard a problem is. I know with certainty whether I’m staying home, but the confidence interval around “I am being careful” is really wide. Concrete guidance makes challenges easier to resolve. If masks work, states and communities should require them unequivocally. Cognitive biases are the reason to mark off six-foot spaces on the supermarket floor or circles in the grass at a park.

For social-distancing shaming to be a valuable public-health tool, average citizens should reserve it for overt defiance of clear official directives—failure to wear a mask when one is required—rather than mere cases of flawed judgment. In the meantime, money and power are located in public and private institutions that have access to public-health experts and the ability to propose specific behavioral norms. The bad judgments that really deserve shaming include the failure to facilitate testing, failure to protect essential workers, failure to release larger numbers of prisoners from facilities that have become COVID-19 hot spots, and failure to create the material conditions that permit strict isolation. America’s half-hearted reopening is a psychological morass, a setup for defeat that will be easy to blame on irresponsible individuals while culpable institutions evade scrutiny.

The info is here.

Thursday, April 30, 2020

Difficult Conversations: Navigating the Tension between Honesty and Benevolence

E. Levine, A. Roberts, & T. Cohen
PsyArXiv
Originally published 18 Jul 19

Abstract

Difficult conversations are a necessary part of everyday life. To help children, employees, and partners learn and improve, parents, managers, and significant others are frequently tasked with the unpleasant job of delivering negative news and critical feedback. Despite the long-term benefits of these conversations, communicators approach them with trepidation, in part, because they perceive them as involving intractable moral conflict between being honest and being kind. In this article, we review recent research on egocentrism, ethics, and communication to explain why communicators overestimate the degree to which honesty and benevolence conflict during difficult conversations, document the conversational missteps people make as a result of this erred perception, and propose more effective conversational strategies that honor the long-term compatibility of honesty and benevolence. This review sheds light on the psychology of moral tradeoffs in conversation, and provides practical advice on how to deliver unpleasant information in ways that improve recipients’ welfare.

From the Summary:

Difficult conversations that require the delivery of negative information from communicators to targets involve perceived moral conflict between honesty and benevolence. We suggest that communicators exaggerate this conflict. By focusing on the short-term harm and unpleasantness associated with difficult conversations, communicators fail to realize that honesty and benevolence are actually compatible in many cases. Providing honest feedback can help a target to learn and grow, thereby improving the target’s overall welfare. Rather than attempting to resolve the honesty-benevolence dilemma via communication strategies that focus narrowly on the short-term conflict between honesty and emotional harm, we recommend that communicators instead invoke communication strategies that integrate and maximize both honesty and benevolence to ensure that difficult conversations lead to long-term welfare improvements for targets. Future research should explore the traits, mindsets, and contexts that might facilitate this approach. For example, creative people may be more adept at integrative solutions to the perceived honesty-benevolence conflict, and people who are less myopic and more cognizant of the future consequences of their choices may be better at recognizing the long-term benefits of honesty.

The info is here.

This research has relevance to psychotherapy.

Wednesday, December 11, 2019

When Assessing Novel Risks, Facts Are Not Enough

Baruch Fischhoff
Scientific American
September 2019

Here is an excerpt:

To start off, we wanted to figure out how well the general public understands the risks they face in everyday life. We asked groups of laypeople to estimate the annual death toll from causes such as drowning, emphysema and homicide and then compared their estimates with scientific ones. Based on previous research, we expected that people would make generally accurate predictions but that they would overestimate deaths from causes that get splashy or frequent headlines—murders, tornadoes—and underestimate deaths from “quiet killers,” such as stroke and asthma, that do not make big news as often.

Overall, our predictions fared well. People overestimated highly reported causes of death and underestimated ones that received less attention. Images of terror attacks, for example, might explain why people who watch more television news worry more about terrorism than individuals who rarely watch. But one puzzling result emerged when we probed these beliefs. People who were strongly opposed to nuclear power believed that it had a very low annual death toll. Why, then, would they be against it? The apparent paradox made us wonder if by asking them to predict average annual death tolls, we had defined risk too narrowly. So, in a new set of questions we asked what risk really meant to people. When we did, we found that those opposed to nuclear power thought the technology had a greater potential to cause widespread catastrophes. That pattern held true for other technologies as well.

To find out whether knowing more about a technology changed this pattern, we asked technical experts the same questions. The experts generally agreed with laypeople about nuclear power's death toll for a typical year: low. But when they defined risk themselves, on a broader time frame, they saw less potential for problems. The general public, unlike the experts, emphasized what could happen in a very bad year. The public and the experts were talking past each other and focusing on different parts of reality.
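A quick way to see the over/underestimation pattern Fischhoff describes is to compare lay estimates of annual deaths with statistical figures on a log scale, so that over- and underestimation are symmetric around zero. The numbers below are invented for illustration and are not data from the study.

```python
# Illustrative sketch (invented numbers): compare lay estimates with statistical
# annual death tolls using a log10 ratio; positive = overestimated, negative = underestimated.
import math

estimates = {            # cause: (lay estimate, statistical figure), deaths per year
    "tornado":  (800, 90),
    "homicide": (20_000, 18_000),
    "asthma":   (500, 3_500),
}

for cause, (lay, actual) in estimates.items():
    log_ratio = math.log10(lay / actual)
    verdict = "overestimated" if log_ratio > 0 else "underestimated"
    print(f"{cause:10s} log10(lay/actual) = {log_ratio:+.2f} ({verdict})")
```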

The info is here.

Monday, November 18, 2019

Understanding behavioral ethics can strengthen your compliance program

Jeffrey Kaplan
The FCPA Blog
Originally posted October 21, 2019

Behavioral ethics is a well-known field of social science which shows how — due to various cognitive biases — “we are not as ethical as we think.” Behavioral compliance and ethics (which is less well known) attempts to use behavioral ethics insights to develop and maintain effective compliance programs. In this post I explore some of the ways that this can be done.

Behavioral C&E should be viewed on two levels. The first could be called specific behavioral C&E lessons, meaning enhancements to the various discrete C&E program elements — e.g., risk assessment, training — based on behavioral ethics insights.   Several of these are discussed below.

The second — and more general — aspect of behavioral C&E is the above-mentioned overarching finding that we are not as ethical as we think. The importance of this general lesson is based on the notion that the greatest challenges to having effective C&E programs in organizations is often more about the “will” than the “way.”

That is, what is lacking in many business organizations is an understanding that strong C&E is truly necessary. After all, if we are as ethical as we think, then effective risk mitigation would be just a matter of finding the right punishment for an offense and the power of logical thinking would do the rest. Behavioral ethics teaches that that assumption is ill-founded.

The info is here.