Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, December 31, 2022

AI’s fairness problem: understanding wrongful discrimination in the context of automated decision-making

Cossette-Lefebvre, H., Maclure, J. 
AI Ethics (2022).
https://doi.org/10.1007/s43681-022-00233-w

Abstract

The use of predictive machine learning algorithms is increasingly common to guide, or even make, decisions in both public and private settings. Their use is touted by some as a potentially useful method to avoid discriminatory decisions since they are, allegedly, neutral and objective, and can be evaluated in ways no human decision can. Outsourcing a decision process (fully or partly) to an algorithm should, in principle, allow human organizations to clearly define the parameters of the decision and to remove human biases. Yet, in practice, the use of algorithms can still be the source of wrongful discriminatory decisions based on at least three of their features: the data-mining process and the categorizations they rely on can reproduce human biases; their automaticity and predictive design can lead them to rely on wrongful generalizations; and their opaque nature is at odds with democratic requirements. We highlight that the latter two aspects of algorithms and their significance for discrimination are too often overlooked in the contemporary literature. Though these problems are not all insurmountable, we argue that it is necessary to clearly define the conditions under which a machine learning decision tool can be used. We identify and propose three main guidelines to properly constrain the deployment of machine learning algorithms in society: algorithms should be vetted to ensure that they do not unduly affect historically marginalized groups; they should not systematically override or replace human decision-making processes; and decisions reached using an algorithm should always be explainable and justifiable.
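
A note on the first guideline: vetting an algorithm for undue impact on marginalized groups is often operationalized as a disparate-impact check. Below is a minimal sketch using the "four-fifths rule" heuristic from U.S. employment-discrimination practice; the rule of thumb, the toy data, and the 0.8 threshold are illustrative assumptions, not the authors' proposal.

```python
# Minimal sketch of a disparate-impact check for a binary decision system.
# The four-fifths (0.8) threshold is a common heuristic from US employment
# law, used here purely for illustration.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Positive-decision rate per group. decisions: 0/1 labels, groups: group labels."""
    pos, tot = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        tot[g] += 1
        pos[g] += d
    return {g: pos[g] / tot[g] for g in tot}

def disparate_impact_ratio(decisions, groups, protected, reference):
    rates = selection_rates(decisions, groups)
    return rates[protected] / rates[reference]

# Hypothetical screening outcomes for two groups.
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(decisions, groups, protected="B", reference="A")
print(f"impact ratio: {ratio:.2f}")  # a value below 0.8 would flag potential adverse impact
```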

From the Conclusion

Given that ML algorithms are potentially harmful because they can compound and reproduce social inequalities, and that they rely on generalizations that disregard individual autonomy, their use should be strictly regulated. Yet these potential problems do not necessarily entail that ML algorithms should never be used, at least from the perspective of anti-discrimination law. Rather, they lead to the conclusion that such use should be carefully and strictly regulated. Before identifying the principles that could guide regulation, however, it is important to highlight two things. First, the context and potential impact associated with the use of a particular algorithm should be considered. Anti-discrimination laws do not aim to protect against every instance of differential treatment or impact, but rather to protect and balance the rights of the implicated parties when they conflict [18, 19]. Hence, using ML algorithms in situations where no rights are threatened would presumably be either acceptable or, at least, beyond the purview of anti-discrimination regulations.

Second, one also needs to take into account how the algorithm is used and what place it occupies in the decision-making process. More precisely, it is clear from what was argued above that fully automated decisions, where an ML algorithm makes decisions with minimal or no human intervention in ethically high-stakes situations (i.e., where individual rights are potentially threatened), are presumably illegitimate because they fail to treat individuals as separate and unique moral agents. To avoid objectionable generalization and to respect our democratic obligations towards each other, a human agent should make the final decision, in a meaningful way that goes beyond rubber-stamping, or should at least be in a position to explain and justify the decision if a person affected by it asks for a revision. While this does not necessarily preclude the use of ML algorithms, it suggests that their use should be embedded in a larger, human-centric, democratic process.

Friday, December 30, 2022

A new control problem? Humanoid robots, artificial intelligence, and the value of control

Nyholm, S. 
AI Ethics (2022).
https://doi.org/10.1007/s43681-022-00231-y

Abstract

The control problem usually discussed in relation to robots and AI is that we might lose control over advanced technologies. When authors like Nick Bostrom and Stuart Russell discuss this control problem, they write in a way that suggests that having as much control as possible is good, while losing control is bad. In life in general, however, not all forms of control are unambiguously positive and unproblematic. Some forms—e.g., control over other persons—are ethically problematic. Other forms of control are positive, and perhaps even intrinsically good. For example, one form of control that many philosophers have argued is intrinsically good, and a virtue, is self-control. In this paper, I relate these questions about control and its value to different forms of robots and AI more generally. I argue that the more robots are made to resemble human beings, the more problematic it becomes—at least symbolically speaking—to want to exercise full control over these robots. After all, it is unethical for one human being to want to fully control another human being. Accordingly, it might be seen as problematic—viz. as representing something intrinsically bad—to want to create humanoid robots over which we exercise complete control. In contrast, if there are forms of AI such that control over them can be seen as a form of self-control, then this might be seen as a virtuous form of control. The “new control problem”, as I call it, is the question of under what circumstances retaining and exercising complete control over robots and AI is unambiguously ethically good.

From the Concluding Discussion section

Self-control is often valued as good in itself or as an aspect of things that are good in themselves, such as virtue, personal autonomy, and human dignity. In contrast, control over other persons is often seen as wrong and bad in itself. This means, I have argued, that if control over AI can sometimes be seen or conceptualized as a form of self-control, then control over AI can sometimes be not only instrumentally good, but in certain respects also good as an end in itself. It can be a form of extended self-control, and therefore a form of virtue, personal autonomy, or even human dignity.

In contrast, if there will ever be any AI systems that could properly be regarded as moral persons, then it would be ethically problematic to wish to be in full control over them, since it is ethically problematic to want to be in complete control over a moral person. But even before that, it might still be morally problematic to want to be in complete control over certain AI systems; it might be problematic if they are designed to look and behave like human beings. There can be, I have suggested, something symbolically problematic about wanting to be in complete control over an entity that symbolizes or represents something—viz. a human being—that it would be morally wrong and in itself bad to try to completely control.

For these reasons, I suggest that it will usually be a better idea to try to develop AI systems that can sensibly be interpreted as extensions of our own agency while avoiding developing robots that can be, imitate, or represent moral persons. One might ask, though, whether the two possibilities can ever come together, so to speak.

Think, for example, of the robotic copy that the Japanese robotics researcher Hiroshi Ishiguro has created of himself. It is an interesting question whether the agency of this robot could be seen as an extension of Ishiguro’s agency. The robot certainly represents or symbolizes Ishiguro. So, if he has control over this robot, then perhaps this can be seen as a form of extended agency and extended self-control. While it might seem symbolically problematic if Ishiguro wants to have complete control over the robot Erica that he has created, which looks like a human woman, it might not be problematic in the same way if he wants to have complete control over the robotic replica that he has created of himself. At least it would be different in terms of what it can be taken to symbolize or represent.

Thursday, December 29, 2022

Parents’ Political Ideology Predicts How Their Children Punish

Leshin, R. A., Yudkin, D. A., Van Bavel, J. J., 
Kunkel, L., & Rhodes, M. (2022). 
Psychological Science
https://doi.org/10.1177/09567976221117154

Abstract

From an early age, children are willing to pay a personal cost to punish others for violations that do not affect them directly. Various motivations underlie such “costly punishment”: People may punish to enforce cooperative norms (amplifying punishment of in-groups) or to express anger at perpetrators (amplifying punishment of out-groups). Thus, group-related values and attitudes (e.g., how much one values fairness or feels out-group hostility) likely shape the development of group-related punishment. The present experiments (N = 269, ages 3–8, from across the United States) tested whether children’s punishment varies according to their parents’ political ideology—a possible proxy for the value systems transmitted to children intergenerationally. As hypothesized, parents’ self-reported political ideology predicted variation in the punishment behavior of their children. Specifically, parental conservatism was associated with children’s punishment of out-group members, and parental liberalism was associated with children’s punishment of in-group members. These findings demonstrate how differences in group-related ideologies shape punishment across generations.

Conclusion

The present findings suggest that political ideology shapes punishment across development. Counter to previous findings among adults (King & Maruna, 2009), parental conservatism (vs. liberalism) was not related to increased punishment overall. And counter to previous developmental research on belief transmission (Gelman et al., 2004), our patterns did not strengthen with age. Rather, we found that across development, the link between ideology and punishment hinged on group membership. Parental conservatism was associated with children’s punishment of out-groups, whereas parental liberalism was associated with children’s punishment of in-groups. Our findings add rich insights to our understanding of how costly punishment functions in group contexts and provide new evidence of the powerful transmission of belief systems across generations.

Wednesday, December 28, 2022

Physician-assisted suicide is not protected by Massachusetts Constitution, top state court rules

Chris Van Buskirk
masslive.com
Originally posted 6 Dec 22

The state’s highest court ruled Monday morning that the Massachusetts state constitution does not protect physician-assisted suicide and that laws around manslaughter may prohibit the practice.

The decision affects whether doctors can prescribe lethal amounts of medication that terminally ill patients could use to end their lives. The plaintiffs, a doctor looking to provide physician-assisted suicide and a patient with incurable cancer, argued that patients with six months or less to live have a constitutional right to bring about their death on their own terms.

But defendants in the case have said that the decision to legalize or formalize the procedure here in Massachusetts is a question best left to state lawmakers, not the courts. And in an 89-page ruling, Associate Justice Frank Gaziano wrote that the Supreme Judicial Court agreed with that position.

The court, he wrote, recognized the “paramount importance and profound significance of all end-of-life decisions” but that the Massachusetts Declaration of Rights does not reach so far as to protect physician-assisted suicide.

“Our decision today does not diminish the critical nature of these interests, but rather recognizes the limits of our Constitution, and the proper role of the judiciary in a functioning democracy. The desirability and practicality of physician-assisted suicide raises not only weighty philosophical questions about the nature of life and death, but also difficult technical questions about the regulation of the medical field,” Gaziano wrote. “These questions are best left to the democratic process, where their resolution can be informed by robust public debate and thoughtful research by experts in the field.”

Plaintiff Roger Kligler, a retired physician, was diagnosed with stage four metastatic prostate cancer, and in May 2018, a doctor told him that there was a fifty percent chance that he would die within five years.

Kligler, Gaziano wrote in the ruling, had not yet received a six-month prognosis, and his cancer “currently has been contained, and his physician asserts that it would not be surprising if Kligler were alive ten years from now.”

Tuesday, December 27, 2022

Are Illiberal Acts Unethical? APA’s Ethics Code and the Protection of Free Speech

O'Donohue, W., & Fisher, J. E. (2022). 
American Psychologist, 77(8), 875–886.
https://doi.org/10.1037/amp0000995

Abstract

The American Psychological Association’s (APA’s) Ethical Principles of Psychologists and Code of Conduct (American Psychological Association, 2017b; hereinafter referred to as the Ethics Code) does not contain an enforceable standard regarding psychologists’ role in either honoring or protecting the free speech of others, or ensuring that their own free speech is protected, including an important corollary of free speech, the protection of academic freedom. Illiberal acts illegitimately restrict civil liberties. We argue that the ethics of illiberal acts have not been adequately scrutinized in the Ethics Code. Psychologists require free speech to properly enact their roles as scientists as well as professionals who wish to advocate for their clients and students to enhance social justice. This article delineates criteria for what ought to be included in the Ethics Code, argues that ethical issues regarding the protection of free speech rights meet these criteria, and proposes language to be added to the Ethics Code.

Impact Statement

Freedom of speech is a fundamental civil right that has come under threat. Psychologists can only perform their duties as scientists, educators, or practitioners if they are not censored and do not fear censorship. The American Psychological Association’s (APA’s) Ethics Code contains no enforceable ethical standard to protect freedom of speech for psychologists. This article examines the ethics of free speech, argues that such protection is an ethical matter, and proposes amending the APA Ethics Code with specific language to more clearly delineate psychologists’ rights and duties regarding free speech.

Conclusions

Free speech is central not only within the political sphere but also to the proper functioning of scholars and educators. Unfortunately, the ethics of free speech are not properly explicated in the current version of the American Psychological Association’s Ethics Code, and this is particularly concerning given data indicating a waning appreciation and protection of free speech in a variety of contexts. This article argues for robust protection of free speech rights through the inclusion of a clear and well-articulated statement in the Ethics Code of the psychologist’s duties related to free speech. Psychologists are committed to social justice, and there can be no social justice without free speech.

Monday, December 26, 2022

Is loneliness in emerging adults increasing over time? A preregistered cross-temporal meta-analysis and systematic review

Buecker, S., Mund, M., Chwastek, S., Sostmann, M.,
& Luhmann, M. (2021). 
Psychological Bulletin, 147(8), 787–805.

Abstract

Judged by the sheer amount of global media coverage, loneliness rates seem to be an increasingly urgent societal concern. From the late 1970s onward, the life experiences of emerging adults have been changing massively due to societal developments such as increased fragmentation of social relationships, greater mobility opportunities, and changes in communication due to technological innovations. These societal developments might have coincided with an increase in loneliness in emerging adults. In the present preregistered cross-temporal meta-analysis, we examined whether loneliness levels in emerging adults have changed over the last 43 years. Our analysis is based on 449 means from 345 studies with 437 independent samples and a total of 124,855 emerging adults who completed the University of California Los Angeles (UCLA) Loneliness Scale between 1976 and 2019. Averaged across all studies, loneliness levels increased linearly with calendar year (β = .224, 95% CI [.138, .309]). This increase corresponds to 0.56 standard deviations on the UCLA Loneliness Scale over the 43-year period studied. Overall, the results imply that loneliness may be a rising concern in emerging adulthood. Although the frequently used term “loneliness epidemic” seems exaggerated, emerging adults should not be overlooked when designing interventions against loneliness.
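
For readers curious about the method: a cross-temporal meta-analysis regresses study-level mean scores on the year of data collection, typically weighting each study by its sample size. The sketch below shows that core computation on invented toy data; the paper's actual weighting, standardization, and modeling choices are more elaborate.

```python
# Minimal sketch of a cross-temporal meta-regression: study-level mean
# loneliness scores regressed on data-collection year, weighted by sample
# size. The data here are invented for illustration, not the paper's dataset.

import numpy as np

years = np.array([1978, 1985, 1992, 1999, 2006, 2013, 2019])
means = np.array([36.1, 36.8, 37.2, 38.0, 38.9, 39.5, 40.3])  # UCLA scale means
ns    = np.array([120, 200, 150, 310, 280, 400, 350])          # sample sizes

# Weighted least squares: solve (X'WX) b = X'W y with W = diag(ns).
X = np.column_stack([np.ones_like(years, dtype=float), years - years.min()])
W = np.diag(ns.astype(float))
b = np.linalg.solve(X.T @ W @ X, X.T @ W @ means)

print(f"estimated increase per year: {b[1]:.3f} scale points")
# A standardized effect (like the paper's beta = .224) would divide by the
# pooled SD of the outcome; total change over the period is slope * 43 years.
```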

Impact Statement

Public Significance Statement—The present cross-temporal meta-analysis suggests that loneliness in emerging adults slightly increased over historical time from 1976 until 2019. Consequently, emerging adults should not be overlooked when designing future interventions or public health campaigns against loneliness.

From the Discussion Section

Contrary to the idea that loneliness has sharply increased since smartphones gained market saturation (in about 2012; Twenge et al., 2018), our data showed that loneliness in emerging adults has remained relatively stable since 2012 but gradually increased when looking at longer periods (i.e., from 1976 until 2019). It therefore seems unlikely that increased smartphone use has led to increases in emerging adults’ loneliness. However, other societal developments since the late 1970s, such as greater mobility and fragmentation of social networks, may explain increases in emerging adults’ loneliness over historical time. Since our meta-analysis cannot provide information on other age groups such as children and adolescents, the role of smartphone use in loneliness could be different in those groups.

Sunday, December 25, 2022

Belief in karma is associated with perceived (but not actual) trustworthiness

Ong, H. H., Evans, A. M., et al. (2022).
Judgment and Decision Making, 17(2), 362–377.

Abstract

Believers in karma accept a form of ethical causation whereby good and bad outcomes can be traced to past moral and immoral acts. Karmic belief may have important interpersonal consequences. We investigated whether American Christians expect more trustworthiness from (and are more likely to trust) interaction partners who believe in karma. We conducted an incentivized study of the trust game in which interaction partners had different beliefs in karma and God. Participants expected more trustworthiness from (and were more likely to trust) karma believers. Expectations did not match actual behavior: karmic belief was not associated with actual trustworthiness. These findings suggest that people may use others' karmic belief as a cue to predict their trustworthiness but err when doing so.
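
For readers unfamiliar with the design: in the canonical trust game (Berg, Dickhaut, & McCabe, 1995), a trustor can send some of an endowment to a trustee, the transfer is multiplied (typically tripled), and the trustee decides how much to return. The sketch below uses those standard default parameters, which may differ from this study's exact stakes.

```python
# Canonical trust game payoffs (Berg, Dickhaut & McCabe, 1995 structure).
# Endowment and multiplier are the standard defaults, used for illustration;
# the study's actual parameters may differ.

def trust_game(endowment: float, sent: float, returned: float, multiplier: float = 3.0):
    """Return (trustor_payoff, trustee_payoff).

    sent: amount the trustor transfers (0 <= sent <= endowment).
    returned: amount the trustee sends back (0 <= returned <= sent * multiplier).
    """
    assert 0 <= sent <= endowment
    pot = sent * multiplier
    assert 0 <= returned <= pot
    trustor = endowment - sent + returned
    trustee = pot - returned
    return trustor, trustee

# Full trust met with trustworthiness (return half the pot):
print(trust_game(endowment=10, sent=10, returned=15))  # (15.0, 15.0)
# Full trust met with betrayal (return nothing):
print(trust_game(endowment=10, sent=10, returned=0))   # (0.0, 30.0)
```

Trust is measured by how much the trustor sends; trustworthiness by how much the trustee returns. The paper's finding is that the first quantity varied with the partner's karmic belief while the second did not.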

From the Discussion Section

We asked whether people perceive individuals who believe in karma, compared with those who do not, to be more trustworthy. In an incentivized study of American Christians, we found evidence that this was indeed the case. People expected interaction partners who believed in karma to behave in a more trustworthy manner and trusted these individuals more. Additionally, this tendency did not depend on the perceiver’s own belief in karma.

While perceivers expected individuals who believed in karma to be more trustworthy, those individuals’ actual trustworthy behavior did not differ by their belief in karma. This discrepancy indicates that, although participants in our study used karmic belief as a cue when making trustworthiness judgments, the cue did not track actual trustworthiness. The absence of an association between karmic belief and actual trustworthy behavior among participants in the trustee role may seem to contradict prior research which found that reminders of karma increased generous behavior in dictator games (White et al., 2019; Willard et al., 2020). However, note that our study did not involve any conspicuous reminders of karma – there was only a single question asking whether participants believe in karma. Thus, it may be that those who believe in karma behave in a more trustworthy manner only when the concept is made salient.

Although we found that karma believers were perceived as more trustworthy, the psychological explanation(s) for this finding remains an open question. One possible explanation is that karma is seen as a source of supernatural justice and that individuals who believe in karma are expected to behave in a more trustworthy manner in order to avoid karmic punishment and/or to reap karmic rewards.


Saturday, December 24, 2022

How Stable are Moral Judgments?

Rehren, P., Sinnott-Armstrong, W.
Review of Philosophy and Psychology (2022).
https://doi.org/10.1007/s13164-022-00649-7

Abstract

Psychologists and philosophers often work hand in hand to investigate many aspects of moral cognition. In this paper, we want to highlight one aspect that to date has been relatively neglected: the stability of moral judgment over time. After explaining why philosophers and psychologists should consider stability and then surveying previous research, we will present the results of an original three-wave longitudinal study. We asked participants to make judgments about the same acts in a series of sacrificial dilemmas three times, 6–8 days apart. In addition to investigating the stability of our participants’ ratings over time, we also explored some potential explanations for instability. To end, we will discuss these and other potential psychological sources of moral stability (or instability) and highlight possible philosophical implications of our findings.

From the General Discussion

We have argued that the stability of moral judgments over time is an important feature of moral cognition for philosophers and psychologists to consider. Next, we presented an original empirical study into the stability over 6–8 days of moral judgments about acts in sacrificial dilemmas. Like Helzer et al. (2017, Study 1), we found an overall test-retest correlation of 0.66. Moreover, we observed moderate to large proportions of rating shifts, and small to moderate proportions of rating revisions (M = 14%), rejections (M = 5%) and adoptions (M = 6%)—that is, the participants in question judged p in one wave, but did not judge p in the other wave.
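
The two quantities reported here, the test-retest correlation and the proportions of rating changes, are simple to compute once the ratings are in hand. Below is a minimal sketch on invented 7-point ratings; the classification of shifts, revisions, rejections, and adoptions is an assumed coding (treating the scale midpoint as "no judgment"), which may not match the paper's exact scheme.

```python
# Minimal sketch: test-retest correlation plus a coarse classification of
# rating changes on a 7-point scale with a neutral midpoint (4). The category
# definitions are an assumption for illustration, not the paper's coding.

import statistics

wave1 = [6, 2, 4, 7, 3, 5, 1, 4, 6, 2]
wave2 = [5, 3, 2, 7, 4, 5, 2, 6, 6, 1]  # same participants, about a week later

r = statistics.correlation(wave1, wave2)  # Pearson test-retest correlation

def classify(a: int, b: int, mid: int = 4) -> str:
    side = lambda x: 0 if x == mid else (1 if x > mid else -1)
    if side(a) == side(b):
        return "stable" if a == b else "shift"          # same judgment, new strength
    if side(a) != 0 and side(b) != 0:
        return "revision"                                # judged p, then judged not-p
    return "rejection" if side(a) != 0 else "adoption"   # judgment dropped / acquired

changes = [classify(a, b) for a, b in zip(wave1, wave2)]
print(f"test-retest r = {r:.2f}")
for cat in ("stable", "shift", "revision", "rejection", "adoption"):
    print(cat, changes.count(cat) / len(changes))
```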

What Explains Instability?

One potential explanation of our results is that they are not a genuine feature of moral judgments about sacrificial dilemmas, but instead are due to measurement error. Measurement error is the difference between the observed and the true value of a variable. So, it may be that most of the rating changes we observed do not mean that many real-life moral judgments about acts in sacrificial dilemmas are (or would be) unstable over short periods of time. Instead, it may be that when people make moral judgments about sacrificial dilemmas in real life, their judgments remain very stable from one week to the next, but our study (perhaps any study) was not able to capture this stability.

To the extent that real-life moral judgment is what moral psychologists and philosophers are interested in, this may suggest a problem with the type of study design used in this and many other papers. If there is enough measurement error, then it may be very difficult to draw firm conclusions about real-life moral judgments from this research. Other researchers have raised related objections. Most forcefully, Bauman et al. (2014) have argued that participants often do not take the judgment tasks used by moral psychologists seriously enough to engage with them in anything like the way they would if they came across the same tasks in the real world (see also Ryazanov et al., 2018). In our view, moral psychologists would do well to move their studies more frequently outside of the (online) lab and into the real world (e.g., Bollich et al., 2016; Hofmann et al., 2014).

(cut)

Instead, our findings may tell us something about a genuine feature of real-life moral judgment. If so, then a natural question to ask is what makes moral judgments unstable (or stable) over time. In this paper, we have looked at three possible explanations, but we did not find evidence for them. First, because sacrificial dilemmas are in a certain sense designed to be difficult, moral judgments about acts in these scenarios may give rise to much more instability than moral judgments about other scenarios or statements. However, when we compared our test-retest correlations with a sampling of test-retest correlations from instruments involving other moral judgments, sacrificial dilemmas did not stand out. Second, we did not find evidence that moral judgment changes occur because people are more confident in their moral judgments the second time around. Third, Study 1b did not find evidence that rating changes, when they occurred, were often due to changes in light of reasons and reflection. Note that this does not mean that we can rule out any of these potential explanations for unstable moral judgments completely. As we point out below, our research is limited in the extent to which it could test each of these explanations, and so one or more of them may still have been the cause for some proportion of the rating changes we observed.

Friday, December 23, 2022

One thought too few: Why we punish negligence

Sarin, A., & Cushman, F. A. (2022, November 7).
https://doi.org/10.31234/osf.io/mj769

Abstract

Why do we punish negligence? Leading accounts explain away the punishment of negligence as a consequence of other, well-known phenomena: outcome bias, character inference, or the volitional choice not to exercise due care. Although they capture many important cases, these explanations fail to account for others. We argue that, in addition to these phenomena, there is something both fundamental and unique to the punishment of negligence itself: people hold others directly responsible for the basic fact of failing to bring to mind information that would help them avoid important risks. In other words, we propose that at its heart negligence is a failure of thought. Drawing on the current literature in moral psychology, we suggest that people find it natural to punish such failures, even when they don’t arise from conscious, volitional choice. Then, drawing on the literature on how thoughts come to mind, we argue that punishing a person for forgetting will help them remember in the future. This provides new insight into the structure and function of our tendency to punish negligent actions.

Conclusion

Why do we punish negligence? Psychologists and philosophers have traditionally offered two answers: outcome bias (a punitive response elicited by the harm caused) and lack of due care (a punitive response elicited by the antecedent intentional choices that made negligence possible). These factors doubtlessly contribute in many cases, and they align well with psychological models that posit causation and intention as the primary determinants of punishment (Cushman, 2008; Laurent et al., 2016; Nobes et al., 2009; Shultz et al., 1986). Another potential explanation, rooted in character-based models of moral judgment (Gray et al., 2012; Malle, 2011; A. Smith, 2017; Sripada, 2016; Uhlmann et al., 2015), is that negligence speaks to an insufficient concern for others.

These models each attempt to “explain away” negligence as an outgrowth of other, better-understood parts of our moral psychology. We have argued, however, that there is something both fundamental and unique to negligence itself: people simply hold others responsible for the basic fact of forgetting (or, more broadly, failing to call to mind) things that would have made them act better. In other words, at its heart, negligence is a failure of thought – a failure to make relevant dispositional knowledge occurrent at the right time.

Our challenge, then, is to explain the design principles behind this mechanism of moral judgment. If we hold people directly responsible for their failures of thought, what purpose does this serve? To address this question, we draw on the literature on how thoughts come to mind. It offers a model both of how negligence occurs and of why punishing such involuntary forgetting is adaptive. Value determines which actions, outcomes, and pieces of knowledge come to mind. Specifically, actions come to mind when they have high value, outcomes when they have high absolute value, and other sorts of knowledge structures when they contribute in valuable ways to the task at hand. After an action is chosen and executed, a person receives various kinds of positive and negative feedback: environmental, social, and internal. All kinds of feedback alter value, whether of actions, outcomes, or other knowledge structures. Value and feedback therefore form a self-reinforcing loop: value determines what comes to mind, and feedback (rewards and punishments) updates value.
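
The loop described in this paragraph can be rendered in a few lines of code. The logistic retrieval rule, delta-rule update, and parameter values below are assumptions for illustration, not the authors' formal model; the point is only that punishing a retrieval failure raises the value of the forgotten item, and with it the probability that it comes to mind next time.

```python
# Minimal sketch of the value-and-feedback loop described above: items come to
# mind with probability increasing in their value, and punishment after a
# retrieval failure raises the item's value, making future recall likelier.
# The learning rule (a simple delta update) and parameters are assumptions.

import math
import random

def recall_prob(value: float) -> float:
    return 1 / (1 + math.exp(-value))       # logistic link: value -> retrieval

value = -1.0                                 # risk-relevant fact starts low-value
alpha = 0.5                                  # learning rate
PUNISHMENT = 2.0                             # social feedback after forgetting

random.seed(1)
for step in range(10):
    recalled = random.random() < recall_prob(value)
    if not recalled:
        value += alpha * PUNISHMENT          # punishment raises the fact's value
    print(step, f"p(recall)={recall_prob(value):.2f}",
          "recalled" if recalled else "forgot")
```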

Thursday, December 22, 2022

In the corner of an Australian lab, a brain in a dish is playing a video game - and it’s getting better

Liam Mannix
Sydney Morning Herald
Originally posted 13 NOV 22

Here is an excerpt:

Artificial intelligence controls an ever-increasing slice of our lives. Smart voice assistants hang on our every word. Our phones leverage machine learning to recognise our face. Our social media lives are controlled by algorithms that surface content to keep us hooked.

These advances are powered by a new generation of AIs built to resemble human brains. But none of these AIs are really intelligent, not in the human sense of the word. They can see the superficial pattern without understanding the underlying concept. Siri can read you the weather but she does not really understand that it’s raining. AIs are good at learning by rote, but struggle to extrapolate: even teenage humans need only a few sessions behind the wheel before they can drive, while Google’s self-driving car still isn’t ready after 32 billion kilometres of practice.

A true ‘general artificial intelligence’ remains out of reach - and, some scientists think, impossible.

Is this evidence human brains can do something special computers never will be able to? If so, the DishBrain opens a new path forward. “The only proof we have of a general intelligence system is done with biological neurons,” says Kagan. “Why would we try to mimic what we could harness?”

He imagines a future part-silicon-part-neuron supercomputer, able to combine the raw processing power of silicon with the built-in learning ability of the human brain.

Others are more sceptical. Human intelligence isn’t special, they argue. Thoughts are just electro-chemical reactions spreading across the brain. Ultimately, everything is physics - we just need to work out the maths.

“If I’m building a jet plane, I don’t need to mimic a bird. It’s really about getting to the mathematical foundations of what’s going on,” says Professor Simon Lucey, director of the Australian Institute for Machine Learning.

Why start the DishBrains on Pong? I ask. Because it’s a game with simple rules that make it ideal for training AI. And, grins Kagan, it was one of the first video games ever coded. A nod to the team’s geek passions - which run through the entire project.

“There’s a whole bunch of sci-fi history behind it. The Matrix is an inspiration,” says Chong. “Not that we’re trying to create a Matrix,” he adds quickly. “What are we but just a gooey soup of neurons in our heads, right?”

Maybe. But the Matrix wasn’t meant as inspiration: it’s a cautionary tale. The humans wired into it existed in a simulated reality while machines stole their bioelectricity. They were slaves.

Is it ethical to build a thinking computer and then restrict its reality to a task to be completed? Even if it is a fun task like Pong?

“The real life correlate of that is people have already created slaves that adore them: they are called dogs,” says Oxford University’s Julian Savulescu.

Thousands of years of selective breeding have turned a wild wolf into an animal that enjoys rounding up sheep, and that loves its human master unconditionally.

Wednesday, December 21, 2022

Do You Really Want to Read What Your Doctor Writes About You?

Zoya Qureshi
The Atlantic
Originally posted 15 NOV 22

You may not be aware of this, but you can read everything that your doctor writes about you. Go to your patient portal online, click around until you land on notes from your past visits, and read away. This is a recent development, and a big one. Previously, you always had the right to request your medical record from your care providers—an often expensive and sometimes fruitless process—but in April 2021, a new federal rule went into effect, mandating that patients have the legal right to freely and electronically access most kinds of notes written about them by their doctors.

If you’ve never heard of “open notes,” as this new law is informally called, you’re not the only one. Doctors say that the majority of their patients have no clue. (This certainly has been the case for all of the friends and family I’ve asked.) If you do know about the law, you likely know a lot about it. That’s typically because you’re a doctor—one who now has to navigate a new era of transparency in medicine—or you’re someone who knows a doctor, or you’re a patient who has become intricately familiar with this country’s health system for one reason or another.

When open notes went into effect, the change was lauded by advocates as part of a greater push toward patient autonomy and away from medical gatekeeping. Previously, hospitals could charge up to hundreds of dollars to release records, if they released them at all. Many doctors, meanwhile, have been far from thrilled about open notes. They’ve argued that this rule will introduce more challenges than benefits for both patients and themselves. At worst, some have fretted, the law will damage people’s trust of doctors and make everyone’s lives worse.

A year and a half in, however, open notes don’t seem to have done too much of anything. So far, they have neither revolutionized patient care nor sunk America’s medical establishment. Instead, doctors say, open notes have barely shifted the clinical experience at all. Few individual practitioners have been advertising the change, and few patients are seeking it out on their own. We’ve been left with a partially implemented system and a big unresolved question: How much, really, should you want to read what your doctor is writing about you?

(cut)

Open notes are only part of this conversation. The new law also requires that test results be made immediately available to patients, meaning that patients might see their health information before their physician does. Although this is fine for the majority of tests, problems arise when results are harbingers of more complex, or just bad, news. Doctors I spoke with shared that some of their patients have suffered trauma from learning about their melanoma or pancreatic cancer or their child’s leukemia from an electronic message in the middle of the night, with no doctor to call to talk through the seriousness of the result. This was the case for Tara Daniels, a digital-marketing consultant who lives near Boston. She’s had leukemia three times, and learned about the third via a late-night notification from her patient portal. Daniels appreciates the convenience of open notes, which help her keep track of her interactions with various doctors. But, she told me, when it comes to instant results, “I still hold a lot of resentment over the fact that I found out from test results, that I had to figure it out myself, before my doctor was able to tell me.”

Tuesday, December 20, 2022

Doesn't everybody jaywalk? On codified rules that are seldom followed and selectively punished

Wylie, J., & Gantman, A.
Cognition, Volume 231, February 2023, 105323

Abstract

Rules are meant to apply equally to all within their jurisdiction. However, some rules are frequently broken without consequence for most. These rules are only occasionally enforced, often at the discretion of a third-party observer. We propose that these rules—whose violations are frequent and whose enforcement is rare—constitute a unique subclass of explicitly codified rules, which we call ‘phantom rules’ (e.g., proscribing jaywalking). Their apparent punishability is ambiguous and particularly susceptible to third-party motives. Across six experiments (N = 1,440), we validated the existence of phantom rules and found evidence for their motivated enforcement. First, people played a modified Dictator Game with a novel frequently broken and rarely enforced rule (i.e., a phantom rule). People enforced this rule more often when the “dictator” was selfish (vs. fair) even though the rule only proscribed fractional offers (not selfishness). Then we turned to third-person judgments of the U.S. legal system. We found these violations are recognizable to participants as both illegal and commonplace (Experiment 2), differentiable from violations of prototypical laws (Experiment 3), and enforced in a motivated way (Experiments 4a and 4b). Phantom rule violations (but not prototypical legal violations) are seen as more justifiably punished when the rule violator has also violated a social norm (vs. rule violation alone)—unless the motivation to punish has been satiated (Experiment 5). Phantom rules are frequently broken, codified rules. Consequently, their apparent punishability is ambiguous, and their enforcement is particularly susceptible to third-party motives.

General Discussion

In this paper, we identified a subset of rules, which are explicitly codified (e.g., in professional tennis, in an economic game, by the U.S. legal system), frequently violated, and rarely enforced. As a result, their apparent punishability is particularly ambiguous and subject to motivation. These rules show us that codified rules, which are meant to apply equally to all, can be used to sanction behaviors outside of their jurisdiction. We named this subclass of rules phantom rules and found evidence that people enforce them according to their desire to punish a different behavior (i.e., a social norm violation), recognize them in the U.S. legal system, and employ motivated reasoning to determine their punishability. We hypothesized and found, across behavioral and survey experiments, that phantom rules—rules where the descriptive norms of enforcement are low—seem enforceable, punishable, and legitimate only when one has an external active motivation to punish. Indeed, we found that phantom rules were judged to be more justifiably enforced and more morally wrong to violate when the person who broke the rule had also violated a social norm—unless they were also punished for that social norm violation. Together, we take this as evidence of the existence of phantom rules and the malleability of their apparent punishability via active (vs. satiated) punishment motivation.

The ambiguity of phantom rule enforcement makes it possible for them to serve a hidden function; they can be used to punish behavior outside the purview of the official rules. Phantom rule violations are technically wrong but, on average, seen as less morally wrong. This means, for the most part, that people are unlikely to feel strongly when they see these rules violated, and indeed, people frequently violate phantom rules without consequence. This pattern fits well with previous work in experimental philosophy showing that motivations can affect how we reason about what constitutes breaking a rule in the first place. For example, when rule breaking occurs blamelessly (e.g., unintentionally), people are less likely to say a rule was violated at all and look for reasons to excuse the behavior (Turri, 2019; Turri & Blouw, 2015). Indeed, our findings mirror this pattern: people find a reason to punish phantom rule violations only when they are particularly or dispositionally motivated to punish.

Monday, December 19, 2022

Socially evaluative contexts facilitate mentalizing

Woo, B. M., Tan, E., Yuen, F. L., & Hamlin, J. K. (2022).
Trends in Cognitive Sciences, in press.

Abstract

Our ability to understand others’ minds stands at the foundation of human learning, communication, cooperation, and social life more broadly. Although humans’ ability to mentalize has been well-studied throughout the cognitive sciences, little attention has been paid to whether and how mentalizing differs across contexts. Classic developmental studies have examined mentalizing within minimally social contexts, in which a single agent seeks a neutral inanimate object. Such object-directed acts may be common, but they are typically consequential only to the object-seeking agent themselves. Here, we review a host of indirect evidence suggesting that contexts providing the opportunity to evaluate prospective social partners may facilitate mentalizing across development. Our article calls on cognitive scientists to study mentalizing in contexts where it counts.

Highlights

Cognitive scientists have long studied the origins of our ability to mentalize. Remarkably little is known, however, about whether there are particular contexts where humans are more likely to mentalize.
We propose that mentalizing is facilitated in contexts where others’ actions shed light on their status as a good or bad social partner. Mentalizing within socially evaluative contexts supports effective partner choice.

Our proposal is based on three lines of evidence. First, infants leverage their understanding of others’ mental states to evaluate others’ social actions. Second, infants, children, and adults demonstrate enhanced mentalizing within socially evaluative contexts. Third, infants, children, and adults are especially likely to mentalize when agents cause negative outcomes. Direct tests of this proposal will contribute to a more comprehensive understanding of human mentalizing.

Concluding remarks

Mental state reasoning is not only used for social evaluation, but may be facilitated, and even overactivated, when humans engage in social evaluation. Human infants begin mentalizing in socially evaluative contexts as soon as they do so in nonevaluative contexts, if not earlier, and mental state representations across human development may be stronger in socially evaluative contexts, particularly when there are negative outcomes. This opinion article supports the possibility that mentalizing is privileged within socially evaluative contexts, perhaps due to its key role in facilitating the selection of appropriate cooperative partners. Effective partner choice may provide a strong foundation upon which humans’ intensely interdependent and cooperative nature can flourish.

The work cited herein is highly suggestive, and more work is clearly needed to further explore this possibility (see Outstanding questions). We have mostly reviewed and compared data across experiments that have studied mentalizing in either socially evaluative or nonevaluative contexts, pulling from a wide range of ages and methods; to our knowledge, no research has directly compared both socially evaluative and nonevaluative contexts within the same experiment.  Experiments using stringent minimal contrast designs would provide stronger tests of our central claims. In addition to such experiments, in the same way that meta-analyses have explored other predictors of mentalizing, we call on future researchers to conduct meta-analyses of findings that come from socially evaluative and nonevaluative contexts. We look forward to such research, which together will move us towards a more comprehensive understanding of humans’ early mentalizing.

Sunday, December 18, 2022

Beliefs about humanity, not higher power, predict extraordinary altruism

Amormino, P., O'Connell, K., et al.
Journal of Research in Personality
Volume 101, December 2022, 104313

Abstract

Using a rare sample of altruistic kidney donors (n = 56, each of whom had donated a kidney to a stranger) and demographically similar controls (n = 75), we investigated how beliefs about human nature correspond to extraordinary altruism. Extraordinary altruists were less likely than controls to believe that humans can be truly evil. Results persisted after controlling for trait empathy and religiosity. Belief in pure good was not associated with extraordinary altruism. We found no differences in the religiosity and spirituality of extraordinary altruists compared to controls. Findings suggest that highly altruistic individuals believe that others deserve help regardless of their potential moral shortcomings. Results provide preliminary evidence that lower levels of cynicism motivate costly, non-normative altruism toward strangers.

Discussion

We found for the first time a significant negative relationship between real-world acts of altruism toward strangers and the belief that humans can be purely evil. Specifically, our results showed that adults who have engaged in costly altruism toward strangers are distinguished from typical adults by their reduced tendency to believe that humans can be purely evil. By contrast, altruists were no more likely than controls to believe that humans can be purely good. These patterns could not be accounted for by demographic differences, differences in self-reported empathy, or differences in religious or spiritual beliefs.

This finding could be viewed as paradoxical, in that extraordinary altruists are themselves often viewed as the epitome of pure good—even described as “saints” in the scholarly literature (Henderson et al., 2003). But our findings suggest that the willingness to provide costly aid to anonymous strangers may not require believing that others are purely good (i.e., that morally infallible people exist), but rather believing that there is at least a little bit of good in everyone. Thus, extraordinary altruists are not overly optimistic about the moral goodness of other people but are willing to act altruistically towards morally imperfect people anyway. Although the concept of “pure evil” is conceptually linked to spiritual phenomena, we did not find any evidence directly linking altruists’ beliefs in evil to spirituality or religion.

 (cut)

Conclusions

Because altruistic kidney donations to anonymous strangers satisfy the most stringent definitions of costly altruism (Clavien & Chapuisat, 2013), the study of these altruists can provide valuable insight into the nature of altruism, much as studying other rare, ecologically valid populations has yielded insights into psychological phenomena such as memory (LePort et al., 2012) and face processing (Russell, Duchaine, & Nakayama, 2009). Results show that altruists report lower belief in pure evil, which extends previous literature showing that higher levels of generalized trust and lower levels of cynicism are associated with everyday prosocial behavior (Turner & Valentine, 2001). Our findings provide preliminary evidence that beliefs about the morality of people in general, and the goodness (or rather, lack of badness) of other humans, may help motivate real-world costly altruistic acts toward strangers.

Saturday, December 17, 2022

Interaction between games give rise to the evolution of moral norms of cooperation

Salahshour M (2022)
PLoS Comput Biol 18(9): e1010429.
https://doi.org/10.1371/journal.pcbi.1010429

Abstract

In many biological populations, such as human groups, individuals face a complex strategic setting, where they need to make strategic decisions over a diverse set of issues and their behavior in one strategic context can affect their decisions in another. This raises the question of how the interaction between different strategic contexts affects individuals’ strategic choices and social norms. To address this question, I introduce a framework where individuals play two games with different structures and decide upon their strategy in the second game based on their knowledge of their opponent’s strategy in the first game. I consider both multistage games, where the same opponents play the two games consecutively, and a reputation-based model, where individuals play their two games with different opponents but receive information about their opponent’s strategy. By considering a case where the first game is a social dilemma, I show that when the second game is a coordination or anti-coordination game, the Nash equilibria of the coupled game can be decomposed into two classes: a defective equilibrium, which is composed of two simple equilibria of the two games, and a cooperative equilibrium, in which coupling between the two games emerges and sustains cooperation in the social dilemma. For the cooperative equilibrium to exist, the cost of cooperation must be smaller than a value determined by the structure of the second game. Investigation of the evolutionary dynamics shows that a cooperative fixed point exists when the second game belongs to the coordination or anti-coordination class in a mixed population. However, the basin of attraction of the cooperative fixed point is much smaller for the coordination class, and this fixed point disappears in a structured population. When the second game belongs to the anti-coordination class, the system possesses a spontaneous symmetry-breaking phase transition above which the symmetry between cooperation and defection breaks. A set of cooperation-supporting moral norms emerges according to which cooperation stands out as a valuable trait. Notably, the moral system also brings a more efficient allocation of resources in the second game. This observation suggests that a moral system has two different roles: promotion of cooperation, which is against individuals’ self-interest but beneficial for the population, and promotion of organization and order, which is in both the population’s and the individual’s self-interest. Interestingly, the latter acts like a Trojan horse: once established out of individuals’ self-interest, it brings the former with it. Importantly, the fact that the evolution of moral norms depends only on the cost of cooperation and is independent of the benefit of cooperation implies that moral norms can be harmful and incur a pure collective cost, yet be just as effective in promoting order and organization. Finally, the model predicts that recognition noise can have a surprisingly positive effect on the evolution of moral norms and facilitates cooperation in the Snowdrift game in structured populations.
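
To make the coupled-game idea concrete, here is a heavily simplified toy simulation. It assumes standard discrete replicator dynamics, a donation-game Prisoner's Dilemma (b = 3, c = 1), and Hawk-Dove payoffs (V = 2, fight cost 3) chosen purely for illustration; strategies condition the second-game move on the partner's first-game action, as in the paper's framework, but nothing else here reproduces the paper's actual model or parameters.

```python
# Heavily simplified sketch: a Prisoner's Dilemma coupled to a second
# (Hawk-Dove style, anti-coordination) game in which each player conditions
# the second-game move on the opponent's first-game action. Payoff values
# are illustrative choices, not the paper's parameterization.

import itertools
import random

B, C_COST = 3.0, 1.0           # PD: benefit and cost of cooperation
V, C_FIGHT = 2.0, 3.0          # Hawk-Dove: resource value and fight cost

def pd(a, b):                  # first-game payoff to the focal player
    return (B if b == "C" else 0.0) - (C_COST if a == "C" else 0.0)

def hd(a, b):                  # second-game payoff to the focal player
    if a == "H":
        return (V - C_FIGHT) / 2 if b == "H" else V
    return 0.0 if b == "H" else V / 2

# A strategy = (PD move, HD move if opponent cooperated, HD move if not).
strategies = list(itertools.product("CD", "HD", "HD"))

def payoff(s, t):
    return pd(s[0], t[0]) + hd(s[1] if t[0] == "C" else s[2],
                               t[1] if s[0] == "C" else t[2])

random.seed(0)
x = [random.random() for _ in strategies]       # random initial mixture
x = [v / sum(x) for v in x]

for _ in range(5000):                           # discrete replicator dynamics
    f = [sum(payoff(s, t) * xt for t, xt in zip(strategies, x)) for s in strategies]
    fbar = sum(fi * xi for fi, xi in zip(f, x))
    x = [max(xi * (1 + 0.01 * (fi - fbar)), 0.0) for xi, fi in zip(x, f)]
    total = sum(x)
    x = [v / total for v in x]

for s, xi in sorted(zip(strategies, x), key=lambda p: -p[1])[:4]:
    print("".join(s), f"{xi:.3f}")
```

Depending on the payoff parameters and the initial mixture, runs can end near a purely defective equilibrium or near states where second-game play is coupled to first-game cooperation; sweeping the cooperation cost is the natural experiment to try against the existence condition described in the abstract.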

Author summary

How do moral norms spontaneously evolve in the presence of selfish incentives? An answer to this question is provided by the observation that moral systems have two distinct functions: Besides encouraging self-sacrificing cooperation, they also bring organization and order into the societies. In contrast to the former, which is costly for the individuals but beneficial for the group, the latter is beneficial for both the group and the individuals. A simple evolutionary model suggests this latter aspect is what makes a moral system evolve based on the individuals’ self-interest. However, a moral system behaves like a Trojan horse: Once established out of the individuals’ self-interest to promote order and organization, it also brings self-sacrificing cooperation.

Friday, December 16, 2022

How Bullying Manifests at Work — and How to Stop It

Ludmila N. Praslova, Ron Carucci, & Caroline Stokes
Harvard Business Review
Originally posted 4 NOV 22

While the organizational costs of incivility and toxicity are well documented, bullying at work is still a problem. An estimated 48.6 million Americans, or about 30% of the workforce, are bullied at work. In India, that percentage is reported to be as high as 46% or even 55%. In Germany, it’s a lower but non-negligible 17%. Yet bullying often receives little attention or effective action.

To maximize workplace health and well-being, it’s critical to create workplaces where all employees — regardless of their position — are safe. Systemic, organizational-level approaches can help prevent the harms associated with different types of bullying.

The term workplace bullying describes a wide range of behaviors, and this complexity makes addressing it difficult and often ineffective. Here, we’ll discuss the different types of bullying, the myths that prevent leaders from addressing it, and how organizations can effectively intervene and create a safer workplace.

The Different Types of Bullying

To develop more comprehensive systems of bullying prevention and support employees’ psychological well-being, leaders first need to be aware of the different types of bullying and how they show up. We’ve identified 15 different features of bullying, based on standard typologies of aggression, data from the Workplace Bullying Institute (WBI), and Ludmila’s 25+ years of research and practice focused on addressing workplace aggression, discrimination, and incivility to create healthy organizational cultures.

These 15 features can be mapped to some of the common archetypes of bullies. Take the “Screamer,” who is associated with yelling and fist-banging, or the quieter but equally dangerous “Schemer,” who uses Machiavellian plotting, gaslighting, and smear campaigns to strip others of resources or push them out. The Schemer doesn’t necessarily have a position of legitimate power and can present as a smiling and eager-to-help colleague or even an innocent-looking intern. While hostile motivation and overt tactics align with the Screamer bully archetype and instrumental, indirect, and covert bullying is typical of the Schemer, a bully can have multiple motives and use multiple tactics — consciously or unconsciously.

Caroline mediated a situation that illustrates both conscious and unconscious dynamics. At the reception to celebrate Ewa’s* national-level achievement award, Harper, her coworker, spent most of the time talking about her own accomplishments, then took the stage to congratulate herself on mentoring Ewa and letting her take “ownership” of their collective work. But there had been no mentorship or collective work. After overtly and directly putting Ewa down and (perhaps unconsciously) attempting to elevate herself, Harper didn’t stop. She “accidentally” removed Ewa from crucial information distribution lists — an act of indirect, covert sabotage.  

In another example, Ludmila encountered a mixed-motive, mixed-tactic situation. Charles, a manager with a strong xenophobic sentiment, regularly berated Noor, a work visa holder, behind closed doors — an act of hostile and direct bullying. Motivated by a desire to take over the high-stakes, high-visibility projects Noor had built, Charles also engaged in indirect, covert bullying by falsifying performance records to make a case for her dismissal.

Thursday, December 15, 2022

Dozens of telehealth startups sent sensitive health information to big tech companies

Katie Palmer with
Todd Feathers & Simon Fondrie-Teitler 
STAT NEWS
Originally posted 13 DEC 22

Here is an excerpt:

Health privacy experts and former regulators said sharing such sensitive medical information with the world’s largest advertising platforms threatens patient privacy and trust and could run afoul of unfair business practices laws. They also emphasized that privacy regulations like the Health Insurance Portability and Accountability Act (HIPAA) were not built for telehealth. That leaves “ethical and moral gray areas” that allow for the legal sharing of health-related data, said Andrew Mahler, a former investigator at the U.S. Department of Health and Human Services’ Office for Civil Rights.

“I thought I was at this point hard to shock,” said Ari Friedman, an emergency medicine physician at the University of Pennsylvania who researches digital health privacy. “And I find this particularly shocking.”

In October and November, STAT and The Markup signed up for accounts and completed onboarding forms on 50 telehealth sites using a fictional identity with dummy email and social media accounts. To determine what data was being shared by the telehealth sites as users completed their forms, reporters examined the network traffic sent to trackers using Chrome DevTools, a tool built into Google’s Chrome browser.

On Workit’s site, for example, STAT and The Markup found that a piece of code Meta calls a pixel sent responses about self-harm, drug and alcohol use, and personal information — including first name, email address, and phone number — to Facebook.

The investigation found trackers collecting information on websites that sell everything from addiction treatments and antidepressants to pills for weight loss and migraines. Despite efforts to trace the data using the tech companies’ own transparency tools, STAT and The Markup couldn’t independently confirm how or whether Meta and the other tech companies used the data they collected.

After STAT and The Markup shared detailed findings with all 50 companies, Workit said it had changed its use of trackers. When reporters tested the website again on Dec. 7, they found no evidence of tech platform trackers during the company’s intake or checkout process.

“Workit Health takes the privacy of our members seriously,” Kali Lux, a spokesperson for the company, wrote in an email. “Out of an abundance of caution, we elected to adjust the usage of a number of pixels for now as we continue to evaluate the issue.”

Wednesday, December 14, 2022

The motivation of mission statements: How regulatory mode influences workplace discrimination

Kanze, D., Conley, M. A., & Higgins, E. T. (2021).
Organizational Behavior and Human 
Decision Processes, 166, 84–103.
https://doi.org/10.1016/j.obhdp.2019.04.002

Abstract

Despite concerted efforts to enforce ethical standards, transgressions continue to plague US corporations. This paper investigates whether the way in which an organization pursues its goals can influence ethical violations, manifested as involvement in discrimination. We test this hypothesis among franchises, which employ a considerable number of low-income workers adversely affected by discrimination. Drawing upon Regulatory Mode Theory, we perform a linguistic analysis of franchise mission statements to determine their degree of locomotion and assessment language. EEOC archival data for the past decade reveal that regulatory mode predicts franchise involvement in discrimination. Discriminatory behavior is associated with franchises whose mission statements motivate employees to embrace urgent action (locomotion mode) over thoughtful consideration (assessment mode). Two experiments demonstrate that participants exposed to high-locomotion mission statements tend to disregard ethical standards due to their need for expediency, making significantly more discriminatory managerial decisions than those exposed to high-assessment mission statements.

Highlights

• We examine the influence of motivational messaging on workplace discrimination.

• The regulatory mode of mission statements predicts discrimination activity.

• Discrimination is associated with motivational messaging high in locomotion mode.

• This risk can be counteracted with language that is high in assessment mode.

• Consideration of ethical standards mediates this effect due to need for expediency.

• We introduce a regulatory mode dictionary to help evaluate motivational language.
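
The authors’ regulatory mode dictionary is their own instrument, but a minimal sketch of how dictionary-based scoring of a mission statement works might look like this (the word lists and function below are illustrative assumptions, not the published dictionary):

```python
# Sketch of dictionary-based scoring; the word lists are illustrative
# stand-ins, not the authors' regulatory mode dictionary.
import re

LOCOMOTION_WORDS = {"act", "move", "drive", "fast", "now", "momentum"}
ASSESSMENT_WORDS = {"evaluate", "compare", "consider", "review", "assess"}

def regulatory_mode_scores(text: str) -> dict:
    """Return the share of locomotion and assessment words in a text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    total = len(tokens) or 1  # avoid division by zero on empty input
    return {
        "locomotion": sum(t in LOCOMOTION_WORDS for t in tokens) / total,
        "assessment": sum(t in ASSESSMENT_WORDS for t in tokens) / total,
    }

print(regulatory_mode_scores("Move fast, act now, and drive results."))
# locomotion ~0.71, assessment 0.0 for this toy statement
```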

From the General Discussion

Regulatory mode and unethical behavior

These studies contribute to the literature that resides at the crossroads of regulatory mode and ethics, informing our understanding of the motivational forces behind discrimination by highlighting the role of locomotion and assessment concerns. We apply regulatory mode theory to investigate the organizational context in which individuals engage in an important, unambiguous, and generalizable facet of unethical behavior: violations of corporate ethical standards known as workplace nondiscrimination policies. Going on to examine the interplay between perceived expediency and attention, we extend scholarly research related to cognitive influences on the perpetrators (Dovidio et al., 2002, Lai and Babcock, 2013) and the companies in which they are employed (Cortina, 2008). Importantly, our work sheds light on the conditions under which employees attend to standards deemed key to ethical conduct (Lau, 2010).

By demonstrating the unintended consequences of leadership decisions embodied in corporate mission statements, our work complements predictive research on discrimination that has primarily been devoted to the effectiveness of intended policies and programs (Castilla, 2015, McKay et al., 2011; see Dipboye & Colella, 2005 and Green, 2003 for several exceptions). The presence of EEOC violations in the face of corporate nondiscrimination policies extends the rich tradition of bounded ethicality research on unintended choices beyond the individual to conceptualize behavior at the organizational level (Chugh et al., 2005). Likewise, we widen the breadth of regulatory mode theory’s applicability, establishing the mechanism by which locomotion and assessment concerns can produce significant organizational-level effects through individual decision making (Bélanger et al., 2015).

Exploring the trade-offs inherent in contrasting modes of goal pursuit, we also enrich the growing literature on the “dark side” of goals (Ordóñez et al., 2009, Welsh and Ordóñez, 2014). In doing so, this work likewise informs a more nuanced understanding of locomotion mode. Our theoretical prediction and empirical support for the pernicious effects of locomotion mode lie in stark contrast to the preponderance of regulatory mode literature. Past work has documented a variety of otherwise positive outcomes—involving transformational leadership, intrinsic task motivation, multi-tasking, time-management, and well-being—associated with locomotion (Amato et al., 2014, Benjamin and Flynn, 2006, Di Santo et al., 2018, Pierro et al., 2013, Pierro et al., 2006).

Our EEOC archival study presented empirical evidence linking regulatory mode to actual managerial transgressions taking place in corporations spanning a wide range of industries that operate throughout the entire United States. These real-world cases of discrimination then served as the decision-making tasks in controlled experiments that manipulated the locomotion and assessment language of mission statements. Employing a combination of archival and experimental methodologies, our work represents a marriage of external and internal validity that enhances both theory and practice in this domain. Ultimately, linguistic applications that modify corporate mission statements for goal pursuit language can answer a recent call “to move beyond a descriptive framework and focus on finding empirically testable strategies to mitigate unethical behavior” (Sezer et al., 2015, p. 78).

Tuesday, December 13, 2022

The Trajectory of Truth: A Longitudinal Study of the Illusory Truth Effect

Henderson, E. L., Simons, D. J., & Barr, D. J.
(2021). Journal of Cognition, 4(1), 29.
DOI: http://doi.org/10.5334/joc.161

Abstract

Repeated statements are rated as subjectively truer than comparable new statements, even though repetition alone provides no new, probative information (the illusory truth effect). Contrary to some theoretical predictions, the illusory truth effect seems to be similar in magnitude for repetitions occurring after minutes or weeks. This Registered Report describes a longitudinal investigation of the illusory truth effect (n = 608, n = 567 analysed) in which we systematically manipulated intersession interval (immediately, one day, one week, and one month) in order to test whether the illusory truth effect is immune to time. Both our hypotheses were supported: We observed an illusory truth effect at all four intervals (overall effect: χ2(1) = 169.91; M(repeated) = 4.52, M(new) = 4.14; H1), with the effect diminishing as delay increased (H2). False information repeated over short timescales might have a greater effect on truth judgements than repetitions over longer timescales. Researchers should consider the implications of the choice of intersession interval when designing future illusory truth effect research.

Discussion

We used a repeated measures, longitudinal design to investigate the trajectory of the illusory truth effect over time: immediately, one day, one week, and one month. Both of our hypotheses were supported: We observed a main effect of the illusory truth effect when averaging across all four delay conditions (H1). The illusory truth effect was present at all four intervals, but the size of the effect diminished as the interval duration increased (H2). The repeated-minus-new difference was largest when tested immediately (0.67) and shrank after one day (0.39), one week (0.27), and one month (0.14). This reduction in the illusory truth effect over time is inconsistent with an earlier meta-analysis that found no relationship between the size of the effect and intersession interval across studies (Dechêne et al., 2010), but it is consistent with one between-subjects study showing a smaller effect after one week than after a few minutes (Silva et al., 2017, Experiment 1).
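
As a rough illustration of how such condition-wise effects are computed, a minimal pandas sketch (with hypothetical column names and toy data, not the study’s dataset) might be:

```python
# Sketch of computing the repeated-minus-new truth-rating difference per
# delay condition; column names and values are hypothetical toy data.
import pandas as pd

ratings = pd.DataFrame({
    "interval":     ["immediate", "immediate", "one_week", "one_week"],
    "repeated":     [True, False, True, False],  # repeated vs. new statement
    "truth_rating": [4.8, 4.1, 4.4, 4.1],        # e.g., on a 1-7 scale
})

# Mean rating for each interval x repetition cell, then the difference.
means = ratings.groupby(["interval", "repeated"])["truth_rating"].mean().unstack()
effect = means[True] - means[False]
print(effect)  # the effect should shrink at longer intervals if it decays
```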

The reduced effect after a delay is consistent with the recognition, familiarity, and processing fluency explanations of the illusory truth effect. All three explanations predict larger effects for recently repeated items and smaller effects as feelings of recognition, familiarity or fluency fade with time.

A caveat to the processing fluency account occurs when the source of fluency is obvious (e.g., when participants recognise that statements have been recently repeated). In such cases, participants might not use processing fluency to make their judgments of truth, thereby eliminating the effect (Alter & Oppenheimer, 2009; Nadarevic & Erdfelder, 2014; Oppenheimer, 2004). Our results challenge this fluency discounting explanation because the size of the illusory truth effect was greatest when tested immediately, when participants should be most aware that some statements had been repeated. Similarly, the source dissociation hypothesis predicts that the illusory truth effect should increase with time as people forget that they saw the statements during the experiment, remembering only the semantic content and attributing it to a source outside the experiment. Here we find the opposite.

Monday, December 12, 2022

Wealth redistribution promotes happiness

R. J. Dwyer and E. W. Dunn
PNAS, 119 (46) e2211123119
November 7, 2022

Significance

We took advantage of a unique experiment, in which anonymous donors gave US$10,000 to each of 200 recipients in seven countries. By comparing cash recipients with a control group that did not receive money, this preregistered experiment provides causal evidence that cash transfers substantially increase happiness across a diverse global sample. These gains were greatest for recipients who had the least: Those in lower-income countries gained three times more happiness than those in higher-income countries. Our data provide the clearest evidence to date that private citizens can improve net global happiness through voluntary redistribution to those with less.

Abstract

How much happiness could be gained if the world’s wealth were distributed more equally? Despite decades of research investigating the relationship between money and happiness, no experimental work has quantified this effect for people across the global economic spectrum. We estimated the total gain in happiness generated when a pair of high-net-worth donors redistributed US$2 million of their wealth in $10,000 cash transfers to 200 people. Our preregistered analyses offer causal evidence that cash transfers substantially increase happiness among economically diverse individuals around the world. Recipients in lower-income countries exhibited happiness gains three times larger than those in higher-income countries. Still, the cash provided detectable benefits for people with household incomes up to $123,000.

From the Discussion section

This study provides causal evidence that cash transfers substantially increase happiness across a diverse sample spanning the global socioeconomic spectrum. By redistributing their wealth, two donors generated substantial happiness gains for others. These gains were greatest for recipients who had the least: Those in lower-income countries gained three times more happiness than those in higher-income countries, and those making $10k a year gained twice as much happiness as those making $100k. Still, the cash provided detectable benefits for people with household incomes up to $123k. Given that 99% of individuals earn less than this amount, these findings suggest that cash transfers could benefit the vast majority of the world’s population.

Of course, some caution is necessary in interpreting these findings, given that the study did not include nationally representative samples and focused on a limited time period. Although all participants were English-speaking Twitter users who were relatively liberal-leaning and well-educated, this sample was more economically diverse than that of any previous cash-transfer study, enabling us to estimate the happiness benefits across a wide range of incomes. That said, our finding that cash transfers improved SWB—but less so for people with higher incomes—is consistent with the law of diminishing marginal utility in economics and with large-scale studies documenting the concave relationship between income and self-reported happiness.
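
The concavity point can be made concrete with a toy log-utility calculation; this is only an illustration of diminishing marginal utility, not the paper’s measurement model:

```python
# Toy illustration of diminishing marginal utility, not the paper's model:
# under log utility, a fixed $10,000 transfer is worth more at low incomes.
import math

def utility_gain(income: float, transfer: float = 10_000) -> float:
    """Log-utility gain from receiving a cash transfer."""
    return math.log(income + transfer) - math.log(income)

for income in (10_000, 50_000, 100_000):
    print(f"${income:,}: gain = {utility_gain(income):.3f}")
# $10,000:  gain = 0.693  (the transfer doubles income)
# $50,000:  gain = 0.182
# $100,000: gain = 0.095
```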

Sunday, December 11, 2022

Strategic Behavior with Tight, Loose, and Polarized Norms

Dimant, E., Gelfand, M. J., Hochleitner, A., 
& Sonderegger, S. (2022).
SSRN.com

Abstract

Descriptive norms – the behavior of other individuals in one’s reference group – play a key role in shaping individual decisions. When characterizing the behavior of others, a standard approach in the literature is to focus on average behavior. In this paper, we argue both theoretically and empirically that not only averages, but the shape of the whole distribution of behavior can play a crucial role in how people react to descriptive norms. Using a representative sample of the U.S. population, we experimentally investigate how individuals react to strategic environments that are characterized by different distributions of behavior, focusing on the distinction between tight (i.e., characterized by low behavioral variance), loose (i.e., characterized by high behavioral variance), and polarized (i.e., characterized by u-shaped behavior) environments. We find that individuals indeed strongly respond to differences in the variance and shape of the descriptive norm they are facing: loose norms generate greater behavioral variance and polarization generates polarized responses. In polarized environments, most individuals prefer extreme actions that expose them to considerable strategic risk to intermediate actions that would minimize such risk. Importantly, we also find that, in polarized and loose environments, personal traits and values play a larger role in determining actual behavior. This provides important insights into how individuals navigate environments that contain strategic uncertainty.

Conclusion

In this study, we investigate how individuals respond to differences in the observed distribution of others’ behavior. In particular, we test how different distributions of cooperative behavior affect an individual’s own willingness to cooperate. We first develop a theoretical framework that is based on the assumption that individuals are conditional cooperators and interpret differences in the observed distribution as a shift in strategic uncertainty. We then test our framework empirically in the context of a public goods game (PGG). To do so, we measure behavior in the PGG both before and after participants receive information about the distribution from which a co-player’s contribution is drawn. We thereby vary both the mean (high/low) and the variance/shape (high variance/low variance/u-shaped) of the observed distribution.
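
To see why the shape of the distribution, and not just its mean, matters for a conditional cooperator, consider this toy simulation (our construction under simplifying assumptions, not the authors’ experimental design):

```python
# Toy simulation, not the authors' design: conditional cooperators who
# match one draw from the observed distribution of co-player contributions.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # simulated participants; contributions on a 0-20 scale

norms = {
    "tight":     np.clip(rng.normal(10, 1, N), 0, 20),  # low variance
    "loose":     np.clip(rng.normal(10, 6, N), 0, 20),  # high variance
    "polarized": 20 * rng.beta(0.3, 0.3, N),            # u-shaped, same mean
}

for name, observed in norms.items():
    # Each conditional cooperator mirrors a random draw from the norm.
    responses = rng.choice(observed, size=N)
    print(f"{name:9s} mean={responses.mean():5.2f} sd={responses.std():5.2f}")
# Means are all ~10, but response variance tracks the norm's shape:
# "tight breeds tight," "loose breeds loose," and polarization polarizes.
```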

Our results confirm previous research showing that information about average behavior has an important effect on subsequent decisions. Individuals contribute significantly more in high-mean conditions than in low-mean conditions. However, the mean is not the only important feature of the distribution. In line with our theoretical framework, we find that looser environments generate a larger variance in individual responses compared to tighter environments. In other words, “tight breeds tight” and “loose breeds loose”. Moreover, we find that, when confronted with a polarized (u-shaped) distribution, participants’ responses are polarized as well. A possible interpretation of these results is that people have heterogeneous reactions to situations characterized by high strategic uncertainty, while they react rather similarly when strategic uncertainty is low. Finally, we find that personal values have higher predictive power for contribution decisions in loose and polarized environments than in tight ones. This suggests that an individual’s reaction to strategic uncertainty may be mediated by their personal values. This, in turn, has practical implications for behavioral change interventions. For example, when intervening in contexts with loose or polarized empirical norms, it may be more fruitful to focus on personal values, whereas in contexts with tight empirical norms, it may be more fruitful to focus on the behaviors of others.

Overall, we show that when studying empirical norms it is crucial to not only consider the average behavior, but the whole distribution. Doing so provides substantial analytical richness that can form the basis for a better understanding of the different behavioral patterns observed across societies.

Saturday, December 10, 2022

From virility to virtue: the psychology of apology in honor cultures

Lin, Y., Caluori, N., Öztürk, E. B., & Gelfand, M. J. (2022). Proceedings of the National Academy of Sciences, 119(41), e2210324119.

Abstract

In honor cultures, relatively minor disputes can escalate, making numerous forms of aggression widespread. We find evidence that honor cultures’ focus on virility impedes apology, a key conflict de-escalation strategy that can be successfully promoted through a shift in mindset. Across five studies using mixed methods (text analysis of congressional speeches, a cross-cultural comparison, surveys, and experiments), people from honor societies (e.g., Turkey and US honor states), people who endorse honor values, and people who imagine living in a society with strong honor norms are less willing to apologize for their transgressions (studies 1-4). This apology reluctance is driven by concerns about reputation in honor cultures. Notably, honor is achieved not only by upholding strength and reputation (virility) but also through moral integrity (virtue). The dual focus of honor suggests a potential mechanism for promoting apologies: shifting the focus of honor from reputation to moral integrity. Indeed, we find that such a shift led people in honor cultures to perceive apologizing more positively and apologize more (study 5). By identifying a barrier to apologizing in honor cultures and illustrating ways to overcome it, our research provides insights for deploying culturally intelligent conflict-management strategies in such contexts.

Significance

Conflict is widespread and can easily escalate in regions where honor is a central value. We find evidence that honor cultures’ focus on virility impedes a key conflict de-escalation strategy—apology—that can be successfully promoted through a shift in mindset. Building on the conceptualization of honor as both virility and virtue, we show that virility concerns of maintaining one’s reputation underlie the reluctance to apologize. Conversely, shifting the focus of honor to virtue concerns promotes apologizing. Our findings suggest that honor is a double-edged sword with the potential to both escalate and de-escalate conflicts.

Discussion

In honor cultures, relatively minor disputes can escalate, making certain forms of aggression widespread. Yet, there is surprisingly little research on how to manage conflicts and disputes in these settings. In the present research, we examine the role of honor culture in apology, an act that is critical to conflict de-escalation and reconciliation. Across five studies, we show that the culture of honor impedes apology. People from honor societies (e.g., Turkey and US honor states) and people who endorse honor values are less willing to apologize for their transgressions. Our final experiment provides insight into ways to promote apologizing when honor is at stake.

When the focus of honor concerns is on moral integrity, people see apologizing more positively and apologize more. Our results suggest that people are unwilling to apologize in part because they worry that apologizing will undermine reputation, a core focal concern in these cultures, and thereby lower their social standing. In addition, we found some evidence that people are less willing to apologize because they consider apologies to be less effective at resolving conflict and repairing relationships. The unwillingness to apologize and the inclination to retaliate after being wronged (36) may create a vicious cycle that further fuels conflicts in honor cultures.