Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Trust.

Tuesday, January 23, 2024

What Is It That You Want Me To Do? Guidance for Ethics Consultants in Complex Discharge Cases

Omelianchuk, A., Ansari, A.A. & Parsi, K.
HEC Forum (2023).

Abstract

Some of the most difficult consultations for an ethics consultant to resolve are those in which the patient is ready to leave the acute-care setting, but the patient or family refuses the plan, or the plan is impeded by deficiencies in the healthcare system. Either way, the patient is “stuck” in the hospital and the ethics consultant is called to help get the patient “unstuck.” These encounters, which we call “complex discharges,” are beset with tensions between the interests of the institution and the interests of the patient as well as tensions within the ethics consultant whose commitments are shaped both by the values of the organization and the values of their own profession. The clinical ethics literature on this topic is limited and provides little guidance. What is needed is guidance for consultants operating at the bedside and for those participating at a higher organizational level. To fill this gap, we offer guidance for facilitating a fair process designed to resolve the conflict without resorting to coercive legal measures. We reflect on three cases to argue that the approach of the consultant is generally one of mediation in these types of disputes. For patients who lack decision making capacity and lack a surrogate decision maker, we recommend the creation of a complex discharge committee within the organization so that ethics consultants can properly discharge their duties to assist patients who are unable to advocate for themselves through a fair and transparent process.

The article is paywalled. Please contact the author for a full copy.

Here is my summary:
  • Ethics consultants face diverse patient situations, including lack of desire to leave, potential mental health issues, and financial/space constraints.
  • Fair discharge processes are crucial, through mediation or multidisciplinary committees, balancing patient needs with system limitations.
  • "Conveyor belt" healthcare can strain trust and create discharge complexities.
  • The ethics consultant role is valuable but limited, suggesting standing "complex case committees" with diverse expertise for effective, creative solutions.
In essence, the article highlights the need for a more nuanced and collaborative approach to complex discharges, prioritizing patient well-being while recognizing systemic constraints.

Thursday, October 26, 2023

The Neuroscience of Trust

Paul J. Zak
Harvard Business Review
Originally posted January-February 2017

Here is an excerpt:

The Return on Trust

After identifying and measuring the managerial behaviors that sustain trust in organizations, my team and I tested the impact of trust on business performance. We did this in several ways. First, we gathered evidence from a dozen companies that have launched policy changes to raise trust (most were motivated by a slump in their profits or market share). Second, we conducted the field experiments mentioned earlier: In two businesses where trust varies by department, my team gave groups of employees specific tasks, gauged their productivity and innovation in those tasks, and gathered very detailed data—including direct measures of brain activity—showing that trust improves performance. And third, with the help of an independent survey firm, we collected data in February 2016 from a nationally representative sample of 1,095 working adults in the U.S. The findings from all three sources were similar, but I will focus on what we learned from the national data since itʼs generalizable.

By surveying the employees about the extent to which firms practiced the eight behaviors, we were able to calculate the level of trust for each organization. (To avoid priming respondents, we never used the word “trust” in surveys.) The U.S. average for organizational trust was 70% (out of a possible 100%). Fully 47% of respondents worked in organizations where trust was below the average, with one firm scoring an abysmally low 15%. Overall, companies scored lowest on recognizing excellence and sharing information (67% and 68%, respectively). So the data suggests that the average U.S. company could enhance trust by improving in these two areas—even if it didn't improve in the other six.
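The excerpt does not spell out how the organizational trust score is computed, but the basic arithmetic is straightforward. The Python sketch below assumes trust is simply the mean of employees' ratings of the eight behaviors; the firm names, ratings, and the six unnamed behavior labels are invented for illustration (only "recognizing excellence," "sharing information," and the 70% national average come from the excerpt).

# Illustrative sketch only (not Zak's actual scoring method): if organizational
# trust is the mean of ratings on the eight behaviors, below-average firms and
# their weakest behaviors can be flagged like this. All names and numbers other
# than the two behaviors and the 70% average cited above are placeholders.

from statistics import mean

orgs = {
    "Firm A": {"recognizing excellence": 67, "sharing information": 68,
               "behavior 3": 72, "behavior 4": 75, "behavior 5": 71,
               "behavior 6": 70, "behavior 7": 69, "behavior 8": 73},
    "Firm B": {"recognizing excellence": 40, "sharing information": 35,
               "behavior 3": 50, "behavior 4": 45, "behavior 5": 55,
               "behavior 6": 48, "behavior 7": 52, "behavior 8": 47},
}

NATIONAL_AVERAGE = 70  # the U.S. average reported in the excerpt

for name, behaviors in orgs.items():
    trust = mean(behaviors.values())
    weakest = min(behaviors, key=behaviors.get)
    status = "below" if trust < NATIONAL_AVERAGE else "at or above"
    print(f"{name}: trust = {trust:.1f}% ({status} the national average); "
          f"weakest behavior: {weakest}")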

The effect of trust on self-reported work performance was powerful. Respondents whose companies were in the top quartile indicated they had 106% more energy and were 76% more engaged at work than respondents whose firms were in the bottom quartile. They also reported being 50% more productive—which is consistent with our objective measures of productivity from studies we have done with employees at work. Trust had a major impact on employee loyalty as well: Compared with employees at low-trust companies, 50% more of those working at high-trust organizations planned to stay with their employer over the next year, and 88% more said they would recommend their company to family and friends as a place to work.


Here is a summary of the key points from the article:
  • Trust is crucial for social interactions and has implications for economic, political, and healthcare outcomes. There are two main types of trust - emotional trust and cognitive trust.
  • Emotional trust develops early in life through attachments and is more implicit, while cognitive trust relies on reasoning and develops later. Both rely on brain regions involved in reward, emotion regulation, understanding others' mental states, and decision making.
  • Oxytocin and vasopressin play key roles in emotional trust by facilitating social bonding and attachment. Disruptions to these systems are linked to social disorders like autism.
  • The prefrontal cortex, amygdala, and striatum are involved in cognitive trust judgments and updating trustworthiness based on new evidence. Damage to prefrontal regions impairs updating of trustworthiness.
  • Trust engages the brain's reward circuitry. Betrayals of trust activate pain and emotion regulation circuits. Trustworthiness cues engage the mentalizing network for inferring others' intentions.
  • Neuroimaging studies show that trust engages brain regions involved in reward, emotion regulation, understanding mental states, and decision making. Oxytocin administration increases trusting behavior.
  • Understanding the neuroscience of trust can inform efforts to build trust in healthcare, economic, political, and other social domains. More research is needed on how trust develops over the lifespan.

Wednesday, October 4, 2023

Humans’ Bias Blind Spot and Its Societal Significance

Pronin, E., & Hazel, L. (2023).
Current Directions in Psychological Science, 0(0).

Abstract

Human beings have a bias blind spot. We see bias all around us but sometimes not in ourselves. This asymmetry hinders self-knowledge and fuels interpersonal misunderstanding and conflict. It is rooted in cognitive mechanics differentiating self- and social perception as well as in self-esteem motives. It generalizes across social, cognitive, and behavioral biases; begins in childhood; and appears across cultures. People show a bias blind spot in high-stakes contexts, including investing, medicine, human resources, and law. Strategies for addressing the problem are described.

(cut)

Bias-limiting procedures

When it comes to eliminating bias, attempts to overcome it via conscious effort and educational training are not ideal. A different strategy is worth considering, when possible: preventing people’s biases from having a chance to operate in the first place, by limiting their access to biasing information. Examples include conducting auditions behind a screen (discussed earlier) and blind review of journal submissions. If fully blocking access to potentially biasing information is not possible or carries more costs than benefits, another less stringent option is worth considering, that is, controlling when the information is presented so that potentially biasing information comes late, ideally after a tentative judgment is made (e.g., “sequential unmasking”; Dror, 2018; “temporary cloaking”; Kang, 2021).

Because of the BBS, people can be resistant to procedures like this that limit their access to biasing information (see Fig. 3). For example, forensics experts prefer consciously trying to avoid bias over being shielded from even irrelevant biasing information (Kukucka et al., 2017). When high school teachers and ensemble singers were asked to assess blinding procedures (in auditioning and grading), they opposed them more for their own group than for the other group and even more for themselves personally (Pronin et al., 2022). This opposition is consistent with experiments showing that people are unconcerned about the effects of biasing decision processes when it comes to their own decisions (Hansen et al., 2014). In those experiments, participants made judgments using a biasing decision procedure (e.g., judging the quality of paintings only after looking to see if someone famous painted them). They readily acknowledged that the procedure was biased, nonetheless made decisions that were biased by that procedure, and then insisted that their conclusions were objective. This unwarranted confidence is a barrier to the self-imposition of bias-reducing procedures. It suggests the need for adopting procedures like this at the policy level rather than counting on individuals or their organizations to do so.

A different bias-limiting procedure that may induce resistance for these same reasons, and that therefore may also benefit from institutional or policy-level implementation, involves precommitting to decision criteria (e.g., Norton et al., 2004; Uhlmann & Cohen, 2005). For example, the human resources officer who precommits to judging job applicants more on the basis of industry experience versus educational background cannot then change that emphasis after seeing that their favorite candidate has unusually impressive academic credentials. This logic is incorporated, for example, into the system of allocating donor organs in the United States, which has explicit and predetermined criteria for making those allocations in order to avoid the possibility of bias in this high-stakes arena. When decision makers are instructed to provide objective criteria for their decision not before making that decision but rather when providing it—that is, the more typical request made of them—this not only makes bias more likely but also, because of the BBS, may even leave decision makers more confident in their objectivity than if they had not been asked to provide those criteria at all.

Here's my brief summary:

The article discusses the concept of the bias blind spot, which refers to people's tendency to recognize bias in others more readily than in themselves. Studies have consistently shown that people rate themselves as less susceptible to various biases than the average person. The bias blind spot occurs even for well-known biases that people readily accept exist. This blind spot has important societal implications, as it impedes recognition of one's own biases. It also leads to assuming others are more biased than oneself, resulting in decreased trust. Overcoming the bias blind spot is challenging but important for issues from prejudice to politics. It requires actively considering one's own potential biases when making evaluations about oneself or others.

Wednesday, July 26, 2023

Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions

Krügel, S., Ostermaier, A. & Uhl, M.
Philos. Technol. 35, 17 (2022).

Abstract

Departing from the claim that AI needs to be trustworthy, we find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data and when they learn information about it that warrants distrust. We conducted online experiments where the subjects took the role of decision-makers who received advice from an algorithm on how to deal with an ethical dilemma. We manipulated the information about the algorithm and studied its influence. Our findings suggest that AI is overtrusted rather than distrusted. We suggest digital literacy as a potential remedy to ensure the responsible use of AI.

Summary

Background: Artificial intelligence (AI) is increasingly being used to make ethical decisions. However, there is a concern that AI-powered advisors may not be trustworthy, due to factors such as bias and opacity.

Research question: The authors of this article investigated whether humans trust AI-powered advisors for ethical decisions, even when they know that the advisor is untrustworthy.

Methods: The authors conducted a series of experiments in which participants were asked to make ethical decisions with the help of an AI advisor. The advisor was either trustworthy or untrustworthy, and the participants were aware of this.

Results: The authors found that participants were more likely to trust the AI advisor, even when they knew that it was untrustworthy. This was especially true when the advisor was able to provide a convincing justification for its advice.

Conclusions: The authors concluded that humans are susceptible to "zombie trust" in AI-powered advisors. This means that we may trust AI advisors even when we know that they are untrustworthy. This is a concerning finding, as it could lead us to make bad decisions based on the advice of untrustworthy AI advisors.  By contrast, decision-makers do disregard advice from a human convicted criminal.

The article also discusses the implications of these findings for the development and use of AI-powered advisors. The authors suggest that it is important to make AI advisors more transparent and accountable, in order to reduce the risk of zombie trust. They also suggest that we need to educate people about the potential for AI advisors to be untrustworthy.

Tuesday, May 2, 2023

Lies and bullshit: The negative effects of misinformation grow stronger over time

Petrocelli, J. V., Seta, C. E., & Seta, J. J. (2023). 
Applied Cognitive Psychology, 37(2), 409–418. 
https://doi.org/10.1002/acp.4043

Abstract

In a world where exposure to untrustworthy communicators is common, trust has become more important than ever for effective marketing. Nevertheless, we know very little about the long-term consequences of exposure to untrustworthy sources, such as bullshitters. This research examines how untrustworthy sources—liars and bullshitters—influence consumer attitudes toward a product. Frankfurt's (1986) insidious bullshit hypothesis (i.e., bullshitting is evaluated less negatively than lying but bullshit can be more harmful than are lies) is examined within a traditional sleeper effect—a persuasive influence that increases, rather than decays, over time. We obtained a sleeper effect after participants learned that the source of the message was either a liar or a bullshitter. However, compared to the liar source condition, the same message from a bullshitter resulted in more extreme immediate and delayed attitudes that were in line with an otherwise discounted persuasive message (i.e., an advertisement). Interestingly, attitudes returned to control condition levels when a bullshitter was the source of the message, suggesting that knowing an initially discounted message may be potentially accurate/inaccurate (as is true with bullshit, but not lies) does not result in the long-term discounting of that message. We discuss implications for marketing and other contexts of persuasion.

General Discussion

There is a considerable body of knowledge about the antecedents and consequences of lying in marketing and other contexts (e.g., Ekman, 1985), but much less is known about the other untrustworthy source: The Bullshitter. The current investigation suggests that the distinction between bullshitting and lying is important to marketing and to persuasion more generally. People are exposed to scores of lies and bullshit every day and this exposure has increased dramatically as the use of the internet has shifted from a platform for socializing to a source of information (e.g., Di Domenico et al., 2021). Because things such as truth status and source status fade faster than familiarity, illusory truth effects for consumer products can emerge after only 3 days post-initial exposure (Skurnik et al., 2005), and within the hour for basic knowledge questions (Fazio et al., 2015). As mirrored in our conditions that received discounting cues after the initial attitude information, at times people are lied to, or bullshitted, and only learn afterwards they were deceived. It is then that these untrustworthy sources appear to have a sleeper effect creating unwarranted and undiscounted attitudes.

It should be noted that our data do not suggest that the impact of lie and bullshit discounting cues fade differentially. However, the discounting cue in the bullshit condition had less of an immediate and long-term suppression effect than in the lie condition. In fact, after 14 days, the bullshit communication not only had more of an influence on attitudes, but the influence was not significantly different from that of the control communication. This finding suggests that bullshit can be more insidious than lies. As it relates to marketing, the insidious nature of exposure to bullshit can create false beliefs that subsequently affect behavior, even when people have been told that the information came from a person known to spread bullshit. The insidious nature of bullshit is magnified by the fact that even when it is clear that one is expressing his/her opinion via bullshit, people do not appear to hold the bullshitter to the same standard as the liar (Frankfurt, 1986). People may think that at least the bullshitter often believes his/her own bullshit, whereas the liar knows his/her statement is not true (Bernal, 2006; Preti, 2006; Reisch, 2006). Because of this difference, what may appear to be harmless communications from a bullshitter may have serious repercussions for consumers and organizations. Additionally, along with the research of Foos et al. (2016), the present research suggests that the harmful influence of untrustworthy sources may not be recognized initially but appears over time. The present research suggests that efforts to fight the consequences of fake news (see Atkinson, 2019) are more difficult because of the sleeper effect. The negative effects of unsubstantiated or false information may not only persist but may grow stronger over time.

Sunday, February 5, 2023

I’m a psychology expert in Finland, the No. 1 happiest country in the world—here are 3 things we never do

Frank Martela
CNBC.com
Originally posted 5 Jan 23

For five years in a row, Finland has ranked No. 1 as the happiest country in the world, according to the World Happiness Report. 

In 2022′s report, people in 156 countries were asked to “value their lives today on a 0 to 10 scale, with the worst possible life as a 0.” It also looks at factors that contribute to social support, life expectancy, generosity and absence of corruption.

As a Finnish philosopher and psychology researcher who studies the fundamentals of happiness, I’m often asked: What exactly makes people in Finland so exceptionally satisfied with their lives?

To maintain a high quality of life, here are three things we never do:

1. We don’t compare ourselves to our neighbors.

Focus more on what makes you happy and less on looking successful. The first step to true happiness is to set your own standards, instead of comparing yourself to others.

2. We don’t overlook the benefits of nature.

Spending time in nature increases our vitality and well-being, and gives us a sense of personal growth. Find ways to add some greenery to your life, even if it's just buying a few plants for your home.

3. We don’t break the community circle of trust.

Think about how you can show up for your community. How can you create more trust? How can you support policies that build upon that trust? Small acts like opening doors for strangers or giving up a seat on the train makes a difference, too.

Sunday, December 25, 2022

Belief in karma is associated with perceived (but not actual) trustworthiness

H.H. Ong, A.M. Evans, et al.
Judgment and Decision Making, Vol. 17, No. 2, March 2022, pp. 362–377

Abstract

Believers of karma believe in ethical causation where good and bad outcomes can be traced to past moral and immoral acts. Karmic belief may have important interpersonal consequences. We investigated whether American Christians expect more trustworthiness from (and are more likely to trust) interaction partners who believe in karma. We conducted an incentivized study of the trust game where interaction partners had different beliefs in karma and God. Participants expected more trustworthiness from (and were more likely to trust) karma believers. Expectations did not match actual behavior: karmic belief was not associated with actual trustworthiness. These findings suggest that people may use others' karmic belief as a cue to predict their trustworthiness but would err when doing so.
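The "incentivized study of the trust game" mentioned in the abstract refers to the standard two-player investment game, in which the amount sent measures trust and the amount returned measures trustworthiness. As a hedged illustration (the endowment, multiplier, and function names below are conventional assumptions, not parameters reported by Ong et al.), the payoff structure works roughly like this in Python:

# Minimal sketch of a standard two-player trust game (Berg et al.-style),
# assuming a 10-unit endowment and a multiplier of 3; the actual parameters
# used by Ong et al. are not given in the abstract.

def trust_game(endowment, sent, multiplier, returned):
    """Return (truster_payoff, trustee_payoff).

    'sent' measures trust; 'returned' (out of what the trustee receives)
    measures trustworthiness.
    """
    assert 0 <= sent <= endowment
    received = sent * multiplier
    assert 0 <= returned <= received
    truster_payoff = endowment - sent + returned
    trustee_payoff = received - returned
    return truster_payoff, trustee_payoff

# Example: the truster sends half the endowment; the trustee returns a third
# of what was received.
print(trust_game(endowment=10, sent=5, multiplier=3, returned=5))  # (10, 10)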

From the Discussion Section

We asked whether people perceive individuals who believe in karma, compared with those who do not, to be more trustworthy. In an incentivized study of American Christians, we found evidence that this was indeed the case. People expected interaction partners who believed in karma to behave in a more trustworthy manner and trusted these individuals more. Additionally, this tendency did not differ across the perceiver’s belief in karma.

While perceivers expected individuals who believed in karma to be more trustworthy, the individuals’ actual trustworthy behavior did not differ across their belief in karma. This discrepancy indicates that, although participants in our study used karmic belief as a cue when making trustworthiness judgment, it did not track actual trustworthiness. The absence of an association between karmic belief and actual trustworthy behavior among participants in the trustee role may seem to contradict prior research which found that reminders of karma increased generous behavior in dictator games (White et al., 2019; Willard et al., 2020). However, note that our study did not involve any conspicuous reminders of karma – there was only a single question asking if participants believe in karma. Thus, it may be that those who believe in karma would behave in a more trustworthy manner only when the concept is made salient.

Although we had found that karma believers were perceived as more trustworthy, the psychological explanation(s) for this finding remains an open question. One possible explanation is that karma is seen as a source of supernatural justice and that individuals who believe in karma are expected to behave in a more trustworthy manner in order to avoid karmic punishment and/or to reap karmic rewards.


Wednesday, December 7, 2022

Corrupt third parties undermine trust and prosocial behaviour between people.

Spadaro, G., Molho, C., Van Prooijen, JW. et al.
Nat Hum Behav (2022).

Abstract

Corruption is a pervasive phenomenon that affects the quality of institutions, undermines economic growth and exacerbates inequalities around the globe. Here we tested whether perceiving representatives of institutions as corrupt undermines trust and subsequent prosocial behaviour among strangers. We developed an experimental game paradigm modelling representatives as third-party punishers to manipulate or assess corruption and examine its relationship with trust and prosociality (trust behaviour, cooperation and generosity). In a sequential dyadic die-rolling task, the participants observed the dishonest behaviour of a target who would subsequently serve as a third-party punisher in a trust game (Study 1a, N = 540), in a prisoner’s dilemma (Study 1b, N = 503) and in dictator games (Studies 2–4, N = 765, pre-registered). Across these five studies, perceiving a third party as corrupt undermined interpersonal trust and, in turn, prosocial behaviour. These findings contribute to our understanding of the critical role that representatives of institutions play in shaping cooperative relationships in modern societies.
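For readers unfamiliar with the paradigm, the sequential dyadic die-rolling task (Weisel & Shalvi, 2015) makes dishonesty statistically visible: both members of a dyad report a die roll and earn a payoff only when the reports match, so a rate of "doubles" above the chance level of 1/6 signals lying. The Python sketch below illustrates that logic only; the payoff rule, round counts, and function names are illustrative assumptions rather than the exact procedure used by Spadaro et al.

# Illustrative sketch of the dyadic die-rolling logic: under honest reporting,
# doubles occur about 1/6 of the time, so an elevated doubles rate indicates
# dishonesty. Numbers here are illustrative, not the authors' design.

import random

def play_round(b_lies: bool) -> bool:
    """Return True if the dyad reports a double in one round."""
    a_report = random.randint(1, 6)            # player A reports first
    b_roll = random.randint(1, 6)
    b_report = a_report if b_lies else b_roll  # a lying B simply matches A
    return a_report == b_report

def doubles_rate(b_lies: bool, rounds: int = 10_000) -> float:
    return sum(play_round(b_lies) for _ in range(rounds)) / rounds

print(f"honest dyads:    ~{doubles_rate(False):.2f} doubles")  # close to 0.17
print(f"dishonest dyads: ~{doubles_rate(True):.2f} doubles")   # close to 1.00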

Discussion

Considerable research in various scientific disciplines has addressed the intricate associations between the degree to which institutions are corrupt and the extent to which people trust one another and build cooperative relations. One perspective suggests that the success of institutions is rooted in interpersonal processes such as trust. Another perspective assumes a top-down process, suggesting that the functioning of institutions serves as a basis to promote and sustain interpersonal trust. However, as far as we know, this latter claim has not been tested in experimental settings.

In the present research, we provided an initial test of a top-down perspective, examining the role of a corrupt versus honest institutional representative, here operationalized as a third-party observer with the power to regulate interaction through punishment. To do so, we revisited the sequential dyadic die-rolling paradigm where the participants could learn whether the third party was corrupt or not via second-hand learning or via first-hand experience. Across five studies (N = 1,808), we found support for the central hypothesis guiding this research: perceiving third parties as corrupt is associated with a decline in interpersonal trust, and subsequent prosocial behaviour, towards strangers. This result was robust across a broad set of economic games and designs.

Sunday, November 27, 2022

Towards a Social Psychology of Cynicism

Neumann, E., & Zaki, J. (2022, September 13).
https://doi.org/10.31234/osf.io/gjm8c

Abstract

Cynicism is the attitude that people are primarily motivated by self-interest. It tracks numerous negative outcomes, and yet many people are cynical. To understand this “cynicism paradox,” we review and call for more social psychological work on how cynicism spreads, with implications for how we might slow it down.

The Cynicism Paradox

Out of almost 8,000 respondents from 41 countries, many agree that “powerful people tend to exploit others” or that “kind-hearted people usually suffer losses”. This indicates widespread cynicism, the attitude that people are primarily motivated by self-interest, often accompanied by emotions such as contempt, anger, and distress, and antagonistic interactions with others. What explains such cynicism? Perhaps it reflects a realistic perception of the suffering caused by human self-interest. But work in social psychology regularly demonstrates that attitudes are not always perfect mirrors of reality. We will argue that people often overestimate self-interest, create it through their expectations, or overstate their own to not appear naïve. Cynicism rises when people witness self-interest, but social psychology – so far relatively quiet on the topic – can explain why they get trapped in this worldview even when it stops tracking reality.

Cynicism is related, but not reducible to, a lack of trust. Trust is often defined as accepting vulnerability based on positive expectations of others. Generalized trust implies a general tendency to have positive expectations of others, and shares with cynicism the tendency to judge the character of a whole group of people. But cynicism is more than reduced positive expectations. It entails a strongly negative view of human nature. The intensity of cynicism’s hostility further differentiates it from mere generalized distrust. Finally, while people can trust and distrust others’ competence, integrity, and predictability, cynicism usually focuses on judgments of moral character. This differentiates cynicism from mere pessimism, which encompasses any negative beliefs about the future, moral or non-moral alike.


Direct applications to psychotherapy.

Wednesday, August 10, 2022

Moral Expansiveness Around the World: The Role of Societal Factors Across 36 Countries

Kirkland, K., Crimston, C. R., et al. (2022).
Social Psychological and Personality Science.
https://doi.org/10.1177/19485506221101767

Abstract

What are the things that we think matter morally, and how do societal factors influence this? To date, research has explored several individual-level and historical factors that influence the size of our ‘moral circles.' There has, however, been less attention focused on which societal factors play a role. We present the first multi-national exploration of moral expansiveness—that is, the size of people’s moral circles across countries. We found low generalized trust, greater perceptions of a breakdown in the social fabric of society, and greater perceived economic inequality were associated with smaller moral circles. Generalized trust also helped explain the effects of perceived inequality on lower levels of moral inclusiveness. Other inequality indicators (i.e., Gini coefficients) were, however, unrelated to moral expansiveness. These findings suggest that societal factors, especially those associated with generalized trust, may influence the size of our moral circles.

From the Discussion section

We found a clear link between greater generalized trust and increased moral expansiveness within-countries. Although we cannot be certain of causality, it may be that since trust is the glue that binds relationships, generalized trust may therefore be a necessary ingredient before one can care for strangers and more distant entities. Furthermore, while perceptions of breakdown within leadership (i.e., that government is ineffective and illegitimate) were not predictive of the scope of moral expansiveness, greater perceptions of breakdown in social fabric (e.g., low trust and no shared moral standards) were linked to reduced MES scores. Together this suggests that the relationships between individuals in a society relate to the size of moral circles as opposed to perceptions of those in power.

Low generalized trust was found to mediate the relationship between a higher perceived wealth gap among the rich and the poor and reduced moral expansiveness both within- and between-countries. Prior research has established that high economic inequality is related to reduced generalized trust (Oishi et al., 2011; Uslaner & Brown, 2005; Wilkinson & Pickett, 2007). This is the first work to show it may also be related to how we construct our moral world. However, experimental evidence or support from longitudinal data is needed before we can be certain about directionality. In contrast, perceptions of the breakdown in social fabric did not mediate the relationship between a higher perceived wealth gap among the rich and the poor and reduced moral expansiveness. Although a breakdown in social fabric is characterized by lower generalized trust between citizens, the social fabric concept also encompasses the perception that a shared moral standard among people is lacking (Teymoori et al., 2017). It thus appears to be the specific element of trust, rather than a breakdown in the social fabric more broadly, that mediates the relationship between the perceived wealth gap and moral expansiveness. Although we found a similar mediation effect at both levels of analysis, there was a non-significant tendency for a higher estimate of the wealth gap between countries to be related to greater moral expansiveness.
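The mediation result described above can be made concrete with a minimal sketch. The Python code below runs a simple two-regression mediation (perceived wealth gap → generalized trust → moral expansiveness) on simulated data; the variable names, effect sizes, and single-level OLS approach are illustrative assumptions, not the multilevel models Kirkland et al. actually fit.

# Minimal sketch of a simple mediation analysis on simulated data: the indirect
# effect is estimated as a*b from two regressions. Effect sizes are invented.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
wealth_gap = rng.normal(size=n)                 # perceived inequality (X)
trust = -0.5 * wealth_gap + rng.normal(size=n)  # generalized trust (M)
moral_circle = 0.6 * trust + rng.normal(size=n) # moral expansiveness (Y)

# Path a: X -> M
a = sm.OLS(trust, sm.add_constant(wealth_gap)).fit().params[1]

# Paths c' and b: X, M -> Y
exog = sm.add_constant(np.column_stack([wealth_gap, trust]))
fit = sm.OLS(moral_circle, exog).fit()
c_prime, b = fit.params[1], fit.params[2]

print(f"indirect effect (a*b) = {a * b:.3f}, direct effect (c') = {c_prime:.3f}")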

Wednesday, June 29, 2022

Abuse case reveals therapist’s dark past, raises ethical concerns

Associated Press
Originally posted 11 JUN 22

Here is an excerpt:

Dushame held a valid driver’s license despite five previous drunken driving convictions, and it was his third fatal crash — though the others didn’t involve alcohol. The Boston Globe called him “the most notorious drunk driver in New England history.”

But over time, he dedicated himself to helping people recovering from addiction, earning a master’s degree in counseling psychology and leading treatment programs from behind bars.

Two years later, he legally changed his name to Peter Stone. He was released from prison in 2002 and eventually set up shop as a licensed drug and alcohol counselor.

Last July, he was charged with five counts of aggravated felonious sexual assault under a law that criminalizes any sexual contact between patients and their therapists or health care providers. Such behavior also is prohibited by the American Psychological Association’s ethical code of conduct.

In a recent interview, the 61-year-old woman said she developed romantic feelings for Stone about six months after he began treating her for anxiety, depression and alcohol abuse in June 2013. Though he told her a relationship would be unethical, he initiated sexual contact in February 2016, she said.

“‘That crossed the line,’” the woman remembers him saying after he pulled up his pants. “‘When am I seeing you again?’”

While about half the states have no restrictions on name changes after felony convictions, 15 have bans or temporary waiting periods for those convicted of certain crimes, according to the ACLU in Illinois, which has one of the most restrictive laws.

Stone appropriately disclosed his criminal record on licensing applications and other documents, according to a review of records obtained by the AP. Disclosure to clients isn’t mandatory, said Gary Goodnough, who teaches counseling ethics at Plymouth State University. But he believes clients have a right to know about some convictions, including vehicular homicide.

Friday, June 17, 2022

Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making

S. Tolmeijer, M. Christen, et al.
In CHI Conference on Human Factors in 
Computing Systems (CHI '22), April 29-May 5,
2022, New Orleans, LA, USA. ACM

While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be more morally trustworthy but less capable than their AI equivalent. This shows in participants’ reliance on AI: AI recommendations and decisions are accepted more often than the human expert's. However, AI team experts are perceived to be less responsible than humans, while programmers and sellers of AI systems are deemed partially responsible instead.

From the Discussion Section

Design implications for ethical AI

In sum, we find that participants had slightly higher moral trust and more responsibility ascription towards human experts, but higher capacity trust, overall trust, and reliance on AI. These different perceived capabilities could be combined in some form of human-AI collaboration. However, lack of responsibility of the AI can be a problem when AI for ethical decision making is implemented. When a human expert is involved but has less autonomy, they risk becoming a scapegoat for the decisions that the AI proposed in case of negative outcomes.

At the same time, we find that the different levels of autonomy, i.e., the human-in-the-loop and human-on-the-loop settings, did not influence the trust people had, the responsibility they assigned (both to themselves and the respective experts), and the reliance they displayed. A large part of the discussion on usage of AI has focused on control and the level of autonomy that the AI gets for different tasks. However, our results suggest that this has less of an influence, as long as a human is appointed to be responsible in the end. Instead, an important focus of designing AI for ethical decision making should be on the different types of trust users show for a human vs. AI expert.

One conclusion from the finding that the control conditions of AI may be of less relevance than expected is that the focus of human-AI collaboration should be less on control and more on how the involvement of AI improves human ethical decision making. An important factor in that respect will be the time available for actual decision making: if time is short, AI advice or decisions should make clear which value was guiding in the decision process (e.g., maximizing the expected number of people to be saved irrespective of any characteristics of the individuals involved), such that the human decider can make (or evaluate) the decision in an ethically informed way. If time for deliberation is available, an AI decision support system could be designed in a way to counteract human biases in ethical decision making (e.g., point to the possibility that human deciders solely focus on utility maximization and in this way neglect fundamental rights of individuals) such that those biases can become part of the deliberation process.

Friday, June 3, 2022

Cooperation as a signal of time preferences

Lie-Panis, J., & André, J. (2021, June 23).
https://doi.org/10.31234/osf.io/p6hc4

Abstract

Many evolutionary models explain why we cooperate with non kin, but few explain why cooperative behavior and trust vary. Here, we introduce a model of cooperation as a signal of time preferences, which addresses this variability. At equilibrium in our model, (i) future-oriented individuals are more motivated to cooperate, (ii) future-oriented populations have access to a wider range of cooperative opportunities, and (iii) spontaneous and inconspicuous cooperation reveal stronger preference for the future, and therefore inspire more trust. Our theory sheds light on the variability of cooperative behavior and trust. Since affluence tends to align with time preferences, results (i) and (ii) explain why cooperation is often associated with affluence, in surveys and field studies. Time preferences also explain why we trust others based on proxies for impulsivity, and, following result (iii), why uncalculating, subtle and one-shot cooperators are deemed particularly trustworthy. Time preferences provide a powerful and parsimonious explanatory lens, through which we can better understand the variability of trust and cooperation.
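Result (i) can be illustrated with the textbook logic of discounting, offered here as a hedged sketch rather than the authors' exact equilibrium condition: an individual with discount factor δ finds it worthwhile to pay an immediate cost c of cooperating when the discounted stream of future benefits b from being trusted outweighs it,

c < δ·b + δ²·b + δ³·b + … = (δ / (1 − δ)) · b.

The larger δ is (the more future-oriented the individual), the larger the immediate cost they are willing to bear, which is why visibly costly cooperation can credibly signal a long time horizon and, per result (iii), inspire more trust.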

From the Discussion Section

Trust depends on revealed time preferences

Result (iii) helps explain why we infer trustworthiness from traits which appear unrelated to cooperation, but happen to predict time preferences. We trust known partners and strangers based on how impulsive we perceive them to be (Peetz & Kammrath, 2013; Righetti & Finkenauer, 2011); impulsivity being associated with both time preferences and cooperativeness in laboratory experiments (Aguilar-Pardo et al., 2013; Burks et al., 2009; Cohen et al., 2014; Martinsson et al., 2014; Myrseth et al., 2015; Restubog et al., 2010). Other studies show we infer cooperative motivation from a wide variety of proxies for partner self-control, including indicators of their indulgence in harmless sensual pleasures (for a review see Fitouchi et al., 2021), as well as proxies for environmental affluence (Moon et al., 2018; Williams et al., 2016).

Time preferences further offer a parsimonious explanation for why different forms of cooperation inspire more trust than others. When the probability of observation p or the cost-benefit ratio r/c is small in our model, helpful behavior reveals a large time horizon, and cooperators may be perceived as relatively genuine or disinterested. We derive two different types of conclusion from this principle (inconspicuous and/or spontaneous cooperation).

Thursday, May 19, 2022

“Google Told Me So!” On the Bent Testimony of Search Engine Algorithms.

Narayanan, D., De Cremer, D.
Philos. Technol. 35, 22 (2022).
https://doi.org/10.1007/s13347-022-00521-7

Abstract

Search engines are important contemporary sources of information and contribute to shaping our beliefs about the world. Each time they are consulted, various algorithms filter and order content to show us relevant results for the inputted search query. Because these search engines are frequently and widely consulted, it is necessary to have a clear understanding of the distinctively epistemic role that these algorithms play in the background of our online experiences. To aid in such understanding, this paper argues that search engine algorithms are providers of “bent testimony”—that, within certain contexts of interactions, users act as if these algorithms provide us with testimony—and acquire or alter beliefs on that basis. Specifically, we treat search engine algorithms as if they were asserting as true the content ordered at the top of a search results page—which has interesting parallels with how we might treat an ordinary testifier. As such, existing discussions in the philosophy of testimony can help us better understand and, in turn, improve our interactions with search engines. By explicating the mechanisms by which we come to accept this “bent testimony,” our paper discusses methods to help us control our epistemic reliance on search engine algorithms and clarifies the normative expectations one ought to place on the search engines that deploy these algorithms.

Conclusion 

We have argued here that search engine algorithms provide us with a kind of testimony when they bring to fore some pieces of content for us to engage with and push behind others. This testimony is “bent,” because: 

(1) We treat these algorithms as if they are recommending to us the content that they feature at the top of a search results list, trusting that this content is more likely to contain true claims.

(2) There are disputed norms of communication about whether the recommendation of a piece of content counts as an assertion of its claims.

An understanding of this mechanism of bent testimony shows us how to control our reliance on it, if we so desired. Decreasing our reliance on this bent testimony entails decreasing our credence in the belief that the content ordered at the top of a search engine is any likelier to contain true claims. Further, we have argued that we ought to treat search engines as if they were testifiers. By having comparable expectations between search engines and ordinary testifiers, we would be able to pursue policy and legal interventions that befit the outsized role that these search engines seem to play when we acquire beliefs online.

Monday, December 6, 2021

Overtrusting robots: Setting a research agenda to mitigate overtrust in automation

Aroyo, A.M.,  et al. (2021).
Journal of Behavioral Robotics,
Vol. 12, no. 1, pp. 423-436. 

Abstract

There is increasing attention given to the concept of trustworthiness for artificial intelligence and robotics. However, trust is highly context-dependent, varies among cultures, and requires reflection on others’ trustworthiness, appraising whether there is enough evidence to conclude that these agents deserve to be trusted. Moreover, little research exists on what happens when too much trust is placed in robots and autonomous systems. Conceptual clarity and a shared framework for approaching overtrust are missing. In this contribution, we offer an overview of pressing topics in the context of overtrust and robots and autonomous systems. Our review mobilizes insights solicited from in-depth conversations from a multidisciplinary workshop on the subject of trust in human–robot interaction (HRI), held at a leading robotics conference in 2020. A broad range of participants brought in their expertise, allowing the formulation of a forward-looking research agenda on overtrust and automation biases in robotics and autonomous systems. Key points include the need for multidisciplinary understandings that are situated in an eco-system perspective, the consideration of adjacent concepts such as deception and anthropomorphization, a connection to ongoing legal discussions through the topic of liability, and a socially embedded understanding of overtrust in education and literacy matters. The article integrates diverse literature and provides a ground for common understanding for overtrust in the context of HRI.

From the Conclusion

In light of the increasing use of automated systems, both embodied and disembodied, overtrust is becoming an ever more important topic. However, our overview shows how the overtrust literature has so far been mostly confined to HRI research and psychological approaches. While philosophers, ethicists, engineers, lawyers, and social scientists more generally have a lot to say about trust and technology, conceptual clarity and a shared framework for approaching overtrust are missing. In this article, our goal was not to provide an overarching framework but rather to encourage further dialogue from an interdisciplinary perspective, integrating diverse literature and providing a ground for common understanding. 

Friday, August 13, 2021

Moral dilemmas and trust in leaders during a global health crisis

Everett, J.A.C., Colombatto, C., Awad, E. et al. 
Nat Hum Behav (2021). 

Abstract

Trust in leaders is central to citizen compliance with public policies. One potential determinant of trust is how leaders resolve conflicts between utilitarian and non-utilitarian ethical principles in moral dilemmas. Past research suggests that utilitarian responses to dilemmas can both erode and enhance trust in leaders: sacrificing some people to save many others (‘instrumental harm’) reduces trust, while maximizing the welfare of everyone equally (‘impartial beneficence’) may increase trust. In a multi-site experiment spanning 22 countries on six continents, participants (N = 23,929) completed self-report (N = 17,591) and behavioural (N = 12,638) measures of trust in leaders who endorsed utilitarian or non-utilitarian principles in dilemmas concerning the COVID-19 pandemic. Across both the self-report and behavioural measures, endorsement of instrumental harm decreased trust, while endorsement of impartial beneficence increased trust. These results show how support for different ethical principles can impact trust in leaders, and inform effective public communication during times of global crisis.

Discussion

The COVID-19 pandemic has raised a number of moral dilemmas that engender conflicts between utilitarian and non-utilitarian ethical principles. Building on past work on utilitarianism and trust, we tested the hypothesis that endorsement of utilitarian solutions to pandemic dilemmas would impact trust in leaders. Specifically, in line with suggestions from previous work and case studies of public communications during the early stages of the pandemic, we predicted that endorsing instrumental harm would decrease trust in leaders, while endorsing impartial beneficence would increase trust.

Thursday, May 13, 2021

Technology and the Value of Trust: Can we trust technology? Should we?

John Danaher
Philosophical Disquisitions
Originally published 30 Mar 21

Can we trust technology? Should we try to make technology, particularly AI, more trustworthy? These are questions that have perplexed philosophers and policy-makers in recent years. The EU’s High Level Expert Group on AI has, for example, recommended that the primary goal of AI policy and law in the EU should be to make the technology more trustworthy. But some philosophers have critiqued this goal as being borderline incoherent. You cannot trust AI, they say, because AI is just a thing. Trust can exist between people and other people, not between people and things.

This is an old debate. Trust is a central value in human society. The fact that I can trust my partner not to betray me is one of the things that makes our relationship workable and meaningful. The fact that I can trust my neighbours not to kill me is one of the things that allows me to sleep at night. Indeed, so implicit is this trust that I rarely think about it. It is one of the background conditions that makes other things in my life possible. Still, it is true that when I think about trust, and when I think about what it is that makes trust valuable, I usually think about trust in my relationships with other people, not my relationships with things.

But would it be so terrible to talk about trust in technology? Should we use some other term instead such as ‘reliable’ or ‘confidence-inspiring’? Or should we, as some blockchain enthusiasts have argued, use technology to create a ‘post-trust’ system of social governance?

I want to offer some quick thoughts on these questions in this article. I will do so in three stages. First, I will briefly review some of the philosophical debates about trust in people and trust in things. Second, I will consider the value of trust, distinguishing between its intrinsic and extrinsic components. Third, I will suggest that it is meaningful to talk about trust in technology, but that the kind of trust we have in technology has a different value to the kind of trust we have in other people. Finally, I will argue that most talk about building ‘trustworthy’ technology is misleading: the goal of most of these policies is to obviate or override the need for trust.

Tuesday, April 20, 2021

State Medical Board Recommendations for Stronger Approaches to Sexual Misconduct by Physicians

King PA, Chaudhry HJ, Staz ML. 
JAMA. 
Published online March 29, 2021. 
doi:10.1001/jama.2020.25775

The Federation of State Medical Boards (FSMB) recently engaged with its member boards and investigators, trauma experts, physicians, resident physicians, medical students, survivors of physician abuse, and the public to critically review practices related to the handling of reports of sexual misconduct (including harassment and abuse) toward patients by physicians. The review was undertaken as part of a core responsibility of boards to protect the public and motivated by concerning reports of unacceptable behavior by physicians. Specific recommendations from the review were adopted by the FSMB’s House of Delegates on May 2, 2020, and are highlighted in this Viewpoint.

Sexual misconduct by physicians exists along a spectrum of severity that may begin with “grooming” behaviors and end with sexual assault. Behaviors at any point on this spectrum should be of concern because unreported minor violations (including sexually suggestive comments or inappropriate physical contact) may lead to greater misconduct. In 2018, the National Academies of Science, Engineering, and Medicine identified sexual harassment as an important problem in scientific communities and medicine, finding that greater than 50% of women faculty and staff and 20% to 50% of women students reportedly have encountered or experienced sexually harassing conduct in academia. Data from state medical boards indicate that 251 disciplinary actions were taken against physicians in 2019 for “sexual misconduct” violations (Table). The actual number may be higher because boards often use a variety of terms, including unprofessional conduct, physician-patient boundary issues, or moral unfitness, to describe such actions. The FSMB has begun a project to encourage boards to align their categorization of all disciplinary actions to better understand the scope of misconduct.

Saturday, April 10, 2021

Ethical and Professionalism Implications of Physician Employment and Health Care Business Practices

DeCamp, M., & Sulmasy, L. S.
Annals of Internal Medicine
Position Paper: 16 March 21

Abstract

The environment in which physicians practice and patients receive care continues to change. Increasing employment of physicians, changing practice models, new regulatory requirements, and market dynamics all affect medical practice; some changes may also place greater emphasis on the business of medicine. Fundamental ethical principles and professional values about the patient–physician relationship, the primacy of patient welfare over self-interest, and the role of medicine as a moral community and learned profession need to be applied to the changing environment, and physicians must consider the effect the practice environment has on their ethical and professional responsibilities. Recognizing that all health care delivery arrangements come with advantages, disadvantages, and salient questions for ethics and professionalism, this American College of Physicians policy paper examines the ethical implications of issues that are particularly relevant today, including incentives in the shift to value-based care, physician contract clauses that affect care, private equity ownership, clinical priority setting, and physician leadership. Physicians should take the lead in helping to ensure that relationships and practices are structured to explicitly recognize and support the commitments of the physician and the profession of medicine to patients and patient care.

Here is an excerpt:

Employment of physicians likewise has advantages, such as financial stability, practice management assistance, and opportunities for collaboration and continuing education, but there is also the potential for dual loyalty when physicians try to be accountable to both their patients and their employers. Dual loyalty is not new; for example, mandatory reporting of communicable diseases may place societal interests in preventing disease at odds with patient privacy interests. However, the ethics of everyday business models and practices in medicine has been less explored.

Trust is the foundation of the patient–physician relationship. Trust, honesty, fairness, and respect among health care stakeholders support the delivery of high-value, patient-centered care. Trust depends on expertise, competence, honesty, transparency, and intentions or goodwill. Institutions, systems, payers, purchasers, clinicians, and patients should recognize and support “the intimacy and importance of patient–clinician relationships” and the ethical duties of physicians, including the primary obligation to act in the patient's best interests (beneficence).

Business ethics does not necessarily conflict with the ethos of medicine. Today, physician leadership of health care organizations may be vital for delivering high-quality care and building trust, including in health care institutions. Truly trustworthy institutions may be more successful (in patient care and financially) in the long term.

Blanket statements about business practices and contractual provisions are unhelpful; most have both potential positives and potential negatives. Nevertheless, it is important to raise awareness of business practices relevant to ethics and professionalism in medical practice and promote the physician's ability to advocate for arrangements that align with medicine's core values. In this paper, the American College of Physicians (ACP) highlights 6 contemporary issues and offers ethical guidance for physicians. Although the observed trends toward physician employment and organizational consolidation merit reflection, certain issues may also resonate with independent practices and in other practice settings.

Wednesday, April 7, 2021

Actionable Principles for Artificial Intelligence Policy: Three Pathways

Stix, C. 
Sci Eng Ethics 27, 15 (2021). 
https://doi.org/10.1007/s11948-020-00277-3

Abstract

In the development of governmental policy for artificial intelligence (AI) that is informed by ethics, one avenue currently pursued is that of drawing on “AI Ethics Principles”. However, these AI Ethics Principles often fail to be actioned in governmental policy. This paper proposes a novel framework for the development of ‘Actionable Principles for AI’. The approach acknowledges the relevance of AI Ethics Principles and homes in on methodological elements to increase their practical implementability in policy processes. As a case study, elements are extracted from the development process of the Ethics Guidelines for Trustworthy AI of the European Commission’s “High Level Expert Group on AI”. Subsequently, these elements are expanded on and evaluated in light of their ability to contribute to a prototype framework for the development of 'Actionable Principles for AI'. The paper proposes the following three propositions for the formation of such a prototype framework: (1) preliminary landscape assessments; (2) multi-stakeholder participation and cross-sectoral feedback; and, (3) mechanisms to support implementation and operationalizability.

(cut)

Actionable Principles

In many areas, including AI, it has proven challenging to bridge ethics and governmental policy-making (Müller 2020, 1.3). To be clear, many AI Ethics Principles, such as those developed by industry actors or researchers for self-governance purposes, are not aimed at directly informing governmental policy-making, and therefore the challenge of bridging this gulf may not apply. Nonetheless, a significant subset of AI Ethics Principles are addressed to governmental actors, from the 2019 OECD Principles on AI (OECD 2019) to the US Defence Innovation Board’s AI Principles adopted by the Department of Defence (DIB 2019). Without focussing on any single effort in particular, the aggregate success of many AI Ethics Principles remains limited (Rességuier and Rodriques 2020). Clear shifts in governmental policy which can be directly traced back to preceding and corresponding sets of AI Ethics Principles, remain few and far between. This could mean, for example, concrete textual references reflecting a specific section of the AI Ethics Principle, or the establishment of (both enabling or preventative) policy actions building on relevant recommendations. A charitable interpretation could be that as governmental policy-making takes time, and given that the vast majority of AI Ethics Principles were published within the last two years, it may simply be premature to gauge (or dismiss) their impact. However, another interpretation could be that the current versions of AI Ethics Principles have fallen short of their promise, and reached their limitation for impact in governmental policy-making (henceforth: policy).

It is worth noting that successful actionability in policy goes well beyond AI Ethics Principles acting as a reference point. Actionable Principles could shape policy by influencing funding decisions, taxation, public education measures or social security programs. Concretely, this could mean increased funding into societally relevant areas, education programs to raise public awareness and increase vigilance, or to rethink retirement structures with regard to increased automation. To be sure, actionability in policy does not preclude impact in other adjacent domains, such as influencing codes of conduct for practitioners, clarifying what demands workers and unions should pose, or shaping consumer behaviour. Moreover, during political shifts or in response to a crisis, Actionable Principles may often prove to be the only (even if suboptimal) available governance tool to quickly inform precautionary and remedial (legal and) policy measures.