Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, December 1, 2021

‘Yeah, we’re spooked’: AI starting to have big real-world impact

Nicola K. Davis
The Guardian
Originally posted 29 OCT 21

Here is an excerpt:

One concern is that a machine would not need to be more intelligent than humans in all things to pose a serious risk. “It’s something that’s unfolding now,” he said. “If you look at social media and the algorithms that choose what people read and watch, they have a huge amount of control over our cognitive input.”

The upshot, he said, is that the algorithms manipulate the user, brainwashing them so that their behaviour becomes more predictable when it comes to what they choose to engage with, boosting click-based revenue.

Have AI researchers become spooked by their own success? “Yeah, I think we are increasingly spooked,” Russell said.

“It reminds me a little bit of what happened in physics where the physicists knew that atomic energy existed, they could measure the masses of different atoms, and they could figure out how much energy could be released if you could do the conversion between different types of atoms,” he said, noting that the experts always stressed the idea was theoretical. “And then it happened and they weren’t ready for it.”

The use of AI in military applications – such as small anti-personnel weapons – is of particular concern, he said. “Those are the ones that are very easily scalable, meaning you could put a million of them in a single truck and you could open the back and off they go and wipe out a whole city,” said Russell.

Russell believes the future for AI lies in developing machines that know the true objective is uncertain, as are our preferences, meaning they must check in with humans – rather like a butler – on any decision. But the idea is complex, not least because different people have different – and sometimes conflicting – preferences, and those preferences are not fixed.

Russell called for measures including a code of conduct for researchers, legislation and treaties to ensure the safety of AI systems in use, and training of researchers to ensure AI is not susceptible to problems such as racial bias. He said EU legislation that would ban impersonation of humans by machines should be adopted around the world.

Tuesday, November 30, 2021

Community standards of deception: Deception is perceived to be ethical when it prevents unnecessary harm

Levine, E. E. (2021). 
Journal of Experimental Psychology: 
General. Advance online publication. 
https://doi.org/10.1037/xge0001081

Abstract

We frequently claim that lying is wrong, despite modeling that it is often right. The present research sheds light on this tension by unearthing systematic cases in which people believe lying is ethical in everyday communication and by proposing and testing a theory to explain these cases. Using both inductive and experimental approaches, the present research finds that deception is perceived to be ethical and individuals want to be deceived when deception is perceived to prevent unnecessary harm. This research identifies eight community standards of deception: rules of deception that most people abide by and recognize once articulated, but have never previously been codified. These standards clarify systematic circumstances in which deception is perceived to prevent unnecessary harm, and therefore, circumstances in which deception is perceived to be ethical. This work also documents how perceptions of unnecessary harm influence the use and judgment of deception in everyday life, above and beyond other moral concerns. These findings provide insight into when and why people value honesty and pave the way for future research on when and why people embrace deception.

From the Discussion

First, this work illuminates how people fundamentally think about deception. Specifically, this work identifies systematic circumstances in which deception is seen as more ethical than honesty, and it provides an organizing framework for understanding these circumstances. A large body of research identifies features of lies that make them seem more or less justifiable and therefore, that lead people to tell more or fewer lies (e.g., Effron, 2018; Rogers, Zeckhauser, Gino, Norton, & Schweitzer, 2017; Shalvi, Dana, Handgraaf, & De Dreu, 2011). However, little research addresses whether people, upon introspection, ever actually believe it is right to tell lies; that is, whether lying is ever a morally superior strategy to truth-telling. The present research finds that people believe lying is the right thing to do when it prevents unnecessary harm. Notably, this finding reveals that lay people seem to have a relatively pragmatic view of deception and honesty. Rather than believing deception is a categorical vice – for example, because it damages social trust (Bok 1978; Kant, 1949) or undermines autonomy (Bacon, 1872; Harris, 2011; Kant, 1959/1785) - people seem to conceptualize deception as a tactic that can and should be used to regulate another vice: harm.

Although this view of deception runs counter to prevailing normative claims and much of the existing scholarship in psychology and economics, which paints deception as generally unethical, it is important to note that this idea – that deception is and should be used pragmatically - is not novel. In fact, many of the rules of deception identified in the present research are alluded to in other philosophical, religious, and practical discussions of deception (see Table 2 for a review). Until now, however, these ideas have been siloed in disparate literatures, and behavioral scientists have lacked a parsimonious framework for understanding why individuals endorse deception in various circumstances. The present research identifies a common psychology that explains a number of seemingly unrelated “exceptions” to the norm of honesty, thereby unifying findings and arguments across psychology, religion, and philosophy under a common theoretical framework.

Monday, November 29, 2021

People use mental shortcuts to make difficult decisions – even highly trained doctors delivering babies

Manasvini Singh
The Conversation
Originally published 14 OCT 21

Here is an excerpt:

Useful time-saver or dangerous bias?

A bias arising from a heuristic implies a deviation from an “optimal” decision. However, identifying the optimal decision in real life is difficult because you usually don’t know what could have been: the counterfactual. This is especially relevant in medicine.

Take the win-stay/lose-shift strategy, for example. There are other studies that show that after “bad” events, physicians switch strategies. Missing an important diagnosis makes physicians test more on subsequent patients. Experiencing complications with a drug makes the physician less likely to prescribe it again.
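To make the heuristic concrete, here is a minimal Python sketch of win-stay/lose-shift in a delivery-mode setting; the two modes and the complication probabilities are hypothetical illustrations, not figures from the study.

```python
import random

def win_stay_lose_shift(n_patients, p_complication, seed=0):
    """Simulate a physician choosing between two delivery modes with a
    win-stay/lose-shift heuristic: keep the current mode after a good
    outcome, switch after a complication. Modes and probabilities are
    illustrative only."""
    rng = random.Random(seed)
    mode = "vaginal"            # arbitrary starting mode
    history = []
    for _ in range(n_patients):
        complication = rng.random() < p_complication[mode]
        history.append((mode, complication))
        if complication:        # "lose-shift": switch after a bad event
            mode = "cesarean" if mode == "vaginal" else "vaginal"
        # "win-stay": keep the same mode after a good outcome
    return history

# Here each patient's risk is independent of the previous outcome, so
# switching changes behavior without improving outcomes.
print(win_stay_lose_shift(10, {"vaginal": 0.2, "cesarean": 0.2}))
```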

But from a learning perspective, it’s difficult to say that ordering a test after missing a diagnosis is a flawed heuristic. Ordering a test always increases the chance that the physician catches an important diagnosis. So it’s a useful heuristic in some instances – say, for example, the physician had been underordering tests before, or the patient or insurer prefers shelling out the extra money for the chance to detect a cancer early.

In my study, though, switching delivery modes after complications offers no documented guarantees of avoiding future complications. And there is the added consideration of the short- and long-term health consequences of delivery-mode choice for mother and baby. Further, people are generally less tolerant of having inappropriate medical procedures performed on them than they are of being the recipients of unnecessary tests.

Tweaking the heuristic

Can physicians’ reliance on heuristics be lessened? Possibly.

Decision support systems that assist physicians with important clinical decisions are gathering momentum in medicine, and could help doctors course-correct after emotional events such as delivery complications.

For example, such algorithms can be built into electronic health records and perform a variety of tasks: flag physician decisions that appear nonstandard, identify patients who could benefit from a particular decision, summarize clinical information in ways that make it easier for physicians to digest and so on. As long as physicians retain at least some autonomy, decision support systems can do just that – support doctors in making clinical decisions.
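As a toy illustration of the first of those tasks, the sketch below flags physicians whose intervention rate is an outlier relative to their peers. The names, rates, and threshold are hypothetical, and real EHR-integrated systems use far richer, risk-adjusted models.

```python
from statistics import mean, stdev

def flag_nonstandard(rates, z_threshold=2.0):
    """Flag physicians whose intervention rate deviates from their peers
    (leave-one-out) by more than z_threshold standard deviations.
    A sketch only, not a clinically validated rule."""
    flagged = {}
    for doc, rate in rates.items():
        peers = [r for d, r in rates.items() if d != doc]
        mu, sigma = mean(peers), stdev(peers)
        if sigma > 0 and abs(rate - mu) / sigma > z_threshold:
            flagged[doc] = rate
    return flagged

# Hypothetical cesarean rates per physician; "E" is flagged for review.
print(flag_nonstandard({"A": 0.22, "B": 0.25, "C": 0.24,
                        "D": 0.23, "E": 0.55}))
```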

Nudges that unobtrusively encourage physicians to make certain decisions can be accomplished by tinkering with the way options are presented – what’s called “choice architecture.” They already work for other clinical decisions.

Sunday, November 28, 2021

Attitude Moralization Within Polarized Contexts: An Emotional Value-Protective Response to Dyadic Harm Cues

D’Amore, C., van Zomeren, M., & Koudenburg, N. 
(2021). Personality and Social Psychology Bulletin.
https://doi.org/10.1177/01461672211047375

Abstract

Polarization about societal issues involves attitudinal conflict, but we know little about how such conflict transforms into moral conflict. Integrating insights on polarization and psychological value protection, we propose a model that predicts when and how attitude moralization (i.e., when attitudes become grounded in core values) may be triggered and develop within polarized contexts. We tested this model in three experiments (total N = 823) in the context of the polarized Zwarte Piet (blackface) debate in the Netherlands. Specifically, we tested the hypotheses that (a) situational cues to dyadic harm in this context (i.e., an outgroup that is perceived as intentionally inflicting harm onto innocent victims) trigger individuals to moralize their relevant attitude, because of (b) emotional value-protective responses. Findings supported both hypotheses across different regional contexts, suggesting that attitude moralization can emerge within polarized contexts when people are exposed to actions by attitudinal opponents perceived as causing dyadic harm.

From the Discussion Section

Harm as dyadic

First, our findings suggest that a focus on dyadic harm may be key to understanding triggers for attitude moralization within polarized contexts. Although most researchers have assigned the general concept of harm a central role in theory on moral judgments (e.g., Kohlberg, 1969; Piaget, 1965; Rozin & Singh, 1999; Turiel, 2006), no previous research on moralization has specifically focused on the dyadic element of harm within polarized contexts. The few empirical studies that examined the role of harm as a general (utilitarian) predictor in the process of attitude moralization about a polarized issue (Brandt et al., 2015; Wisneski & Skitka, 2017) did not find clear support for its predictive power. Interestingly, our consistent finding that strong cues to dyadic harm served as a situational trigger for attitude moralization adds to this literature by suggesting that for understanding moralization triggers within polarized contexts, it is important to understand when people perceive harm as more dyadic (in this case, when a concrete outgroup is perceived as intentionally harming innocent [ingroup] victims). Indeed, we suggest that, in polarized contexts at least, harm could trigger attitude moralization when it is perceived to be dyadic—that is, intentionally harmful. This implies that researchers interested in predicting attitude moralization within polarized contexts should consider conceptualizing and measuring harm as dyadic.

Saturday, November 27, 2021

Hate and meaning in life: How collective, but not personal, hate quells threat and spurs meaning in life

A. Elnakouri, C. Hubley, & I. McGregor
Journal of Experimental Social Psychology
Volume 98, January 2022,

Abstract

Classic and contemporary perspectives link meaning in life to the pursuit of a significant purpose, free from incoherence. The typical assumption is that these meaningful purposes are prosocial, or at least benign. Here, we tested whether hate might also bolster meaning in life, via motivational states underlying significant purpose and coherence. In two studies (N = 847; Study 2 pre-registered), describing hatred (vs. mere dislike) towards collective entities (societal phenomena, institutions, groups), but not individuals, heightened feelings linked to the behavioral approach system (BAS; eagerness, determination, enthusiasm), which underlies a sense of significant purpose, and muted feelings linked to threat and the behavioral inhibition system (BIS; confused, uncertain, conflicted), which underlies a sense of incoherence. This high BAS and low BIS, in turn, predicted meaning in life beyond pre-manipulation levels. Exploratory analyses suggested that personal hatreds did not have the meaning-bolstering effects that collective hatreds had due to meaning-dampening negative feelings. Discussion focuses on motivation for collective and ideological hatreds in threatening circumstances.

Conclusion 

Classic and contemporary theories in psychology and beyond propose that various threats can cause zealous responses linked to collective hate (Arendt, 1951; Freud, 1937; Jonas et al., 2014). The present research offers one reason behind the appeal of collective hate in such circumstances: its ability to spur meaning in life. Shielded from the negativity of personal hate, collective forms of hate can mute threat and BIS-related feelings and boost BAS-related feelings, thereby fostering meaning in life. This research therefore helps us better understand the motivational drivers of hate and why it is an ever-present feature of the human condition.

Friday, November 26, 2021

Paranoia, self-deception and overconfidence

Rossi-Goldthorpe RA, et al (2021) 
PLoS Comput Biol 17(10): e1009453. 
https://doi.org/10.1371/journal.pcbi.1009453

Abstract

Self-deception, paranoia, and overconfidence involve misbeliefs about the self, others, and world. They are often considered mistaken. Here we explore whether they might be adaptive, and further, whether they might be explicable in Bayesian terms. We administered a difficult perceptual judgment task with and without social influence (suggestions from a cooperating or competing partner). Crucially, the social influence was uninformative. We found that participants heeded the suggestions most under the most uncertain conditions and that they did so with high confidence, particularly if they were more paranoid. Model fitting to participant behavior revealed that their prior beliefs changed depending on whether the partner was a collaborator or competitor; however, those beliefs did not differ as a function of paranoia. Instead, paranoia, self-deception, and overconfidence were associated with participants’ perceived instability of their own performance. These data are consistent with the idea that self-deception, paranoia, and overconfidence flourish under uncertainty, and have their roots in low self-esteem, rather than excessive social concern. The model suggests that spurious beliefs can have value: self-deception is irrational yet can facilitate optimal behavior. This occurs even at the expense of monetary rewards, perhaps explaining why self-deception and paranoia contribute to costly decisions which can spark financial crashes and devastating wars.

Author summary

Paranoia is the belief that others intend to harm you. Some people think that paranoia evolved to serve a coalitional function and should thus be related to the mechanisms of group membership and reputation management. Others have argued that its roots are much more basic, being based instead in how the individual models and anticipates their world–even non-social things. To adjudicate, we gave participants a difficult perceptual decision-making task, during which they received advice on what to decide from a partner, who was either a collaborator (in their group) or a competitor (outside of their group). Using computational modeling of participant choices, which allowed us to estimate the role of social and non-social processes in the decision, we found that the manipulation worked: people placed a stronger prior weight on the advice from a collaborator compared to a competitor. However, paranoia did not interact with this effect. Instead, paranoia was associated with participants’ beliefs about their own performance. When those beliefs were poor, paranoid participants relied heavily on the advice, even when it contradicted the evidence. Thus, we find a mechanistic link between paranoia, self-deception, and overconfidence.
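A minimal sketch of the kind of cue combination at work, using a toy logistic (log-odds) model rather than the authors' fitted hierarchical model; the trust and self-confidence values are invented for illustration.

```python
import math

def p_follow_advice(evidence_logodds, trust_logodds, self_confidence):
    """Toy Bayesian-style cue combination for an advised perceptual choice.
    Own evidence is down-weighted by confidence in one's own performance;
    the partner's suggestion is weighted by a trust prior. Illustrative
    only -- not the model from Rossi-Goldthorpe et al. (2021)."""
    combined = self_confidence * evidence_logodds + trust_logodds
    return 1 / (1 + math.exp(-combined))   # P(choose the advised option)

# Evidence weakly favors the UNadvised option (log-odds -1.0).
for partner, trust in [("collaborator", 1.5), ("competitor", 0.5)]:
    for conf, label in [(1.0, "stable self-belief"),
                        (0.3, "unstable self-belief")]:
        p = p_follow_advice(-1.0, trust, conf)
        print(f"{partner:12s} {label:20s} P(follow advice) = {p:.2f}")
```

Qualitatively, the toy model reproduces both reported patterns: advice is weighted more heavily when it comes from a collaborator, and reliance on advice grows as belief in one's own performance becomes unstable, even against the evidence.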

Thursday, November 25, 2021

APF Gold Medal Award for Life Achievement in the Practice of Psychology: Samuel Knapp

American Psychologist, 76(5), 812–814. 

This award recognizes a distinguished career and enduring contribution to the practice of psychology. Samuel Knapp’s long, distinguished career has resulted in demonstrable effects and significant contributions to best practices in professionalism, ethics education, positive ethics, and legislative advocacy as Director of Professional Affairs for the Pennsylvania Psychological Association and as an ethics educator extraordinaire. Dr. Knapp’s work has modified the way psychologists think about professional ethics through education, from avoiding disciplinary consequences to promoting overarching ethical principles to achieve the highest standards of ethical behavior. His focus on respectful collaboration among psychologists promotes honesty through nonjudgmental conversations. His Ethics Educators Workshop and other continuing education programs have brought together psychology practitioners and faculty to focus deeply on ethics and resulted in the development of the APA Ethics Educators Award.

From the Biography section

Ethics education became especially important in Pennsylvania when the Pennsylvania State Board of Psychology mandated ethics as part of its continuing education requirement. But even before that, members of the PPA Ethics Committee and Board of Directors saw ethics education as a vehicle to help psychologists improve the quality of their services to their patients. Also, to the extent that ethics education can help promote good decision-making, it could also reduce the emotional burden that professional psychologists often feel when faced with difficult ethical situations. Often the continuing education programs were interactive, with the secondary goals of helping psychologists build contacts with each other and giving the presenters an opportunity to promote authentic and compassion-driven approaches to teaching ethics. Yes, Sam and the other PPA ethics educators, such as the PPA attorney Rachael Baturin, also taught the laws, ethics codes, and risk management strategies. But these were only one component of PPA’s ethics education program. More important was the development of a cadre of psychologists/ethicists who taught most of these continuing education programs.


Wednesday, November 24, 2021

Moral masters or moral apprentices? A connectionist account of sociomoral evaluation in preverbal infants

Benton, D. T., & Lapan, C. 
(2021, February 21). 
https://doi.org/10.31234/osf.io/mnh35

Abstract

Numerous studies suggest that preverbal infants possess the ability to make sociomoral judgements and demonstrate a preference for prosocial agents. Some theorists argue that infants possess an “innate moral core” that guides their sociomoral reasoning. However, we propose that infants’ capacity for putative sociomoral evaluation and reasoning can just as likely be driven by a domain-general associative-learning mechanism that is sensitive to agent action. We implement this theoretical account in a connectionist computational model and show that it can account for the pattern of results in Hamlin et al. (2007), Hamlin and Wynn (2011), Hamlin (2013), and Hamlin, Wynn, Bloom, and Mahajan (2011). These are pioneering studies in this area and were among the first to examine sociomoral evaluation in preverbal infants. Based on the results of 5 computer simulations, we suggest that an associative-learning mechanism—instantiated as a computational (connectionist) model—can account for previous findings on preverbal infants’ capacity for sociomoral evaluation. These results suggest that an innate moral core may not be necessary to account for sociomoral evaluation in infants.

From the General Discussion

The simulations suggest that the preverbal infants’ reliable choice of helpers over hinderers in Hamlin et al. (2007), Hamlin and Wynn (2011), Hamlin (2013), and Hamlin et al. (2011) could have been based on extensive real-world experience with various kinds of actions (e.g., concordant action and discordant action) and an expectation—based on a learned second-order correlation—that agents that engage in certain kinds of actions (e.g., concordant action) have the capacity for interaction, whereas agents that engage in certain kinds of other actions (e.g., discordant action) either do not have the capacity for interaction or have less of a capacity for it.
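As a rough sketch of such a domain-general associative mechanism (a simple delta-rule learner, not the authors' full connectionist model), the feature coding and training set below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training events: an action feature vector paired with whether the
# agents subsequently interact. Features (hypothetical coding):
# [concordant_action, discordant_action].
events = [(np.array([1.0, 0.0]), 1.0),   # concordant -> interaction
          (np.array([0.0, 1.0]), 0.0)]   # discordant -> no interaction

w = np.zeros(2)                          # associative weights
lr = 0.1
for _ in range(200):                     # stands in for extensive experience
    x, target = events[rng.integers(len(events))]
    pred = 1 / (1 + np.exp(-w @ x))      # predicted capacity for interaction
    w += lr * (target - pred) * x        # delta-rule (associative) update

# After learning, a concordant actor ("helper") is expected to be more
# interactive than a discordant actor ("hinderer").
expect = lambda x: 1 / (1 + np.exp(-w @ x))
print(f"helper: {expect(np.array([1.0, 0.0])):.2f}, "
      f"hinderer: {expect(np.array([0.0, 1.0])):.2f}")
```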

Broadly, these results are consistent with work by Powell and Spelke (2018). They found that 4- to 5½-month-old infants looked longer at characters that engaged in concordant (i.e., imitative) action with other characters than characters that engaged in discordant (i.e., non-imitative) action with other characters (Exps. 1 and 2). Specifically, infants looked longer at characters that engaged in the same jumping motion and made the same sound as a target character than those characters that engaged in the same jumping motion but made a different sound than the target character. Our results are also consistent with their finding—which was based on a conceptual replication of Hamlin et al. (2007)—that 12-month-olds reliably reached for a character that engaged in concordant (i.e., imitative) action with the climber rather than a character that engaged in discordant (i.e., non-imitative) action with it (Exp. 4), even when those actions were non-social.

Tuesday, November 23, 2021

The Moral Identity Picture Scale (MIPS): Measuring the Full Scope of Moral Identity

Amelia Goranson, Connor O’Fallon, & Kurt Gray
Research Paper, in press

Abstract

Morality is core to people’s identity. Existing moral identity scales measure good/moral vs. bad/immoral, but the Theory of Dyadic Morality highlights two dimensions of morality: valence (good/moral vs. bad/immoral) and agency (high/agent vs. low/recipient). The Moral Identity Picture Scale (MIPS) measures this full space through 16 vivid pictures. Participants receive scores for each of four moral roles: hero, villain, victim, and beneficiary. The MIPS can also provide summary scores for good, evil, agent, and patient, and possesses test-retest reliability and convergent/divergent validity. Self-identified heroes are more empathic and higher in locus of control, villains are less agreeable and higher in narcissism, victims are higher in depression and lower in self-efficacy, and beneficiaries are lower in Machiavellianism. Although people generally see themselves as heroes, comparisons across known groups reveal relative differences: Duke MBA students self-identify more as villains, UNC social work students self-identify more as heroes, and workplace bullying victims self-identify more as victims. The data also reveal that the beneficiary role is ill-defined, collapsing the two-dimensional space of moral identity into a triangle anchored by hero, villain, and victim.

From the Discussion

We hope that, in providing this new measure of moral identity, future work can examine a broader sense of the moral world—beyond simple identifications of good vs. evil—using our expanded measure that captures not only valence but also role as a moral agent or patient. This measure expands upon previous measures related to moral identity (e.g., Aquino & Reed, 2002; Barriga et al., 2001; Reimer & Wade-Stein, 2004), replicating prior work showing that we divide the moral world into good and evil, but demonstrating that the moral identification space includes another component as well: moral agency and moral patiency. Most past work has examined this “agent” side of moral identity—heroes and villains—but we can gain a fuller and more nuanced view of the moral world if we also examine their counterparts—moral patients/recipients. The MIPS provides us with the ability to examine moral identity across these two dimensions of valence (positive vs. negative) and agency (agent vs. patient).
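To make the two-dimensional structure concrete, here is one plausible way to derive the four summary scores from the four role scores. The mapping below is our assumption from the dyadic-morality layout (hero = good agent, villain = evil agent, victim = patient of evil, beneficiary = patient of good), not the published scoring key, and the example scores are invented.

```python
def mips_summaries(s):
    """Hypothetical MIPS summary scores, assuming each summary is the
    mean of the two roles sharing that pole of the valence or agency
    dimension. The actual scoring rule should be taken from the
    authors' materials."""
    return {
        "good":    (s["hero"] + s["beneficiary"]) / 2,   # positive valence
        "evil":    (s["villain"] + s["victim"]) / 2,     # negative valence
        "agent":   (s["hero"] + s["villain"]) / 2,       # high agency
        "patient": (s["victim"] + s["beneficiary"]) / 2, # low agency
    }

# Invented role scores on an arbitrary 1-7 scale
print(mips_summaries({"hero": 6.1, "villain": 2.0,
                      "victim": 3.2, "beneficiary": 4.5}))
```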