Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Social Influence. Show all posts

Sunday, January 7, 2024

The power of social influence: A replication and extension of the Asch experiment

Franzen A, Mader S (2023)
PLoS ONE 18(11): e0294325.


In this paper, we pursue four goals: First, we replicate the original Asch experiment with five confederates and one naïve subject in each group (N = 210). Second, in a randomized trial we incentivize the decisions in the line experiment and demonstrate that monetary incentives lower the error rate, but that social influence is still at work. Third, we confront subjects with different political statements and show that the power of social influence can be generalized to matters of political opinion. Finally, we investigate whether intelligence, self-esteem, the need for social approval, and the Big Five are related to the susceptibility to provide conforming answers. We find an error rate of 33% for the standard length-of-line experiment which replicates the original findings by Asch (1951, 1955, 1956). Furthermore, in the incentivized condition the error rate decreases to 25%. For political opinions we find a conformity rate of 38%. However, besides openness, none of the investigated personality traits are convincingly related to the susceptibility to group pressure.

My summary:

This research aimed to replicate and extend the classic Asch conformity experiment, investigating the extent to which individuals conform to group pressure in a line-judging task. The study involved 210 participants divided into groups, with one naive participant and five confederates who provided deliberately incorrect answers. Replicating the original findings, the researchers observed an average error rate of 33%, demonstrating the enduring power of social influence in shaping individual judgments.

Extending the investigation, the study examined the impact of monetary incentives on conformity. Offering monetary rewards for correct judgments reduced the error rate from 33% to 25%, suggesting that individuals are more likely to resist social pressure when accuracy is personally rewarded. Even with incentives, however, a substantial level of conformity remained, indicating that social influence persists even when it competes with personal interests.

Sunday, July 30, 2023

Social influences in the digital era: When do people conform more to a human being or an artificial intelligence?

Riva, P., Aureli, N., & Silvestrini, F. 
(2022). Acta Psychologica, 229, 103681. 


The spread of artificial intelligence (AI) technologies in ever-widening domains (e.g., virtual assistants) increases the chances of daily interactions between humans and AI. But can non-human agents influence human beings and perhaps even surpass the power of the influence of another human being? This research investigated whether people faced with different tasks (objective vs. subjective) could be more influenced by the information provided by another human being or an AI. We expected greater AI (vs. other humans) influence in objective tasks (i.e., based on a count and only one possible correct answer). By contrast, we expected greater human (vs. AI) influence in subjective tasks (based on attributing meaning to evocative images). In Study 1, participants (N = 156) completed a series of trials of an objective task to provide numerical estimates of the number of white dots pictured on black backgrounds. Results showed that participants conformed more with the AI's responses than the human ones. In Study 2, participants (N = 102) in a series of subjective tasks observed evocative images associated with two concepts ostensibly provided, again, by an AI or a human. Then, they rated how each concept described the images appropriately. Unlike the objective task, in the subjective one, participants conformed more with the human than the AI's responses. Overall, our findings show that under some circumstances, AI can influence people above and beyond the influence of other humans, offering new insights into social influence processes in the digital era.


Our research might offer new insights into social influence processes in the digital era. The results showed that people can conform more to non-human agents (than human ones) in a digital context under specific circumstances. For objective tasks eliciting uncertainty, people might be more prone to conform to AI agents than another human being, whereas for subjective tasks, other humans may continue to be the most credible source of influence compared with AI agents. These findings highlight the relevance of matching agents and the type of task to maximize social influence. Our findings could be important for non-human agent developers, showing under which circumstances a human is more prone to follow the guidance of non-human agents. Proposing a non-human agent in a task in which it is not so trusted could be suboptimal. Conversely, in objective-type tasks that elicit uncertainty, it might be advantageous to emphasize the nature of the agent as artificial intelligence, rather than trying to disguise the agent as human (as some existing chatbots tend to do). In conclusion, it is important to consider, on the one hand, that non-human agents can become credible sources of social influence and, on the other hand, the match between the type of agent and the type of task.


The first study found that people conformed more to AI than to human sources on objective tasks, such as estimating the number of white dots on a black background. The second study found that people conformed more to human than to AI sources on subjective tasks, such as attributing meaning to evocative images.

The authors conclude that the findings of their studies suggest that AI can be a powerful source of social influence, especially on objective tasks. However, they also note that the literature on AI and social influence is still limited, and more research is needed to understand the conditions under which AI can be more or less influential than human sources.

Key points:
  • The spread of AI technologies is increasing the chances of daily interactions between humans and AI.
  • Research has shown that people can be influenced by AI on objective tasks, but they may be more influenced by humans on subjective tasks.
  • More research is needed to understand the conditions under which AI can be more or less influential than human sources.

Friday, March 4, 2022

Social media really is making us more morally outraged

Charlotte Hu
Popular Science
updated 13 AUG 21

Here is an excerpt:

The most interesting finding for the team was that some of the more politically moderate people tended to be the ones who are influenced by social feedback the most. “What we know about social media now is that a lot of the political content we see is actually produced by a minority of users—the more extreme users,” Brady says. 

One question that’s come out of this study is: what are the conditions under which moderate users either become more socially influenced to conform to a more extreme tone, as opposed to just get turned off by it and leave the platform, or don’t engage any more? “I think both of these potential directions are important because they both imply that the average tone of conversation on the platform will get increasingly extreme.”

Social media can exploit base human psychology

Moral outrage is a natural tendency. “It’s very deeply ingrained in humans, it happens online, offline, everyone, but there is a sense that the design of social media can amplify in certain contexts this natural tendency we have,” Brady says. But moral outrage is not always bad. It can have important functions, and therefore, “it’s not a clear-cut answer that we want to reduce moral outrage.”

“There’s a lot of data now that suggest that negative content does tend to draw in more engagement on the average than positive content,” says Brady. “That being said, there are lots of contexts where positive content does draw engagement. So it’s definitely not a universal law.” 

It’s likely that multiple factors are fueling this trend. People could be attracted to posts that are more popular or go viral on social media, and past studies have shown that we want to know what the gossip is and what people are doing wrong. But the more people engage with these types of posts, the more platforms push them to us. 

Jonathan Nagler, a co-director of NYU Center for Social Media and Politics, who was not involved in the study, says it’s not shocking that moral outrage gets rewarded and amplified on social media. 

Monday, November 15, 2021

On Defining Moral Enhancement: A Clarificatory Taxonomy

Carl Jago
Journal of Experimental Social Psychology
Volume 95, July 2021, 104145


In a series of studies, we ask whether and to what extent the base rate of a behavior influences associated moral judgment. Previous research aimed at answering different but related questions are suggestive of such an effect. However, these other investigations involve injunctive norms and special reference groups which are inappropriate for an examination of the effects of base rates per se. Across five studies, we find that, when properly isolated, base rates do indeed influence moral judgment, but they do so with only very small effect sizes. In another study, we test the possibility that the very limited influence of base rates on moral judgment could be a result of a general phenomenon such as the fundamental attribution error, which is not specific to moral judgment. The results suggest that moral judgment may be uniquely resilient to the influence of base rates. In a final pair of studies, we test secondary hypotheses that injunctive norms and special reference groups would inflate any influence on moral judgments relative to base rates alone. The results supported those hypotheses.

From the General Discussion

In multiple experiments aimed at examining the influence of base rates per se, we found that base rates do indeed influence judgments, but the size of the effect we observed was very small. We considered that, in discovering moral judgments’ resilience to influence from base rates, we may have only rediscovered a general tendency, such as the fundamental attribution error, whereby people discount situational factors. If so, this tendency would then also apply broadly to non-moral scenarios. We therefore conducted another study in which our experimental materials were modified so as to remove the moral components. We found a substantial base-rate effect on participants’ judgments of performance regarding non-moral behavior. This finding suggests that the resilience to base rates observed in the preceding studies is unlikely the result of a more general tendency, and may instead be unique to moral judgment.

The main reasons why we concluded that the results from the most closely related extant research could not answer the present research question were the involvement in those studies of injunctive norms and special reference groups. To confirm that these factors could inflate any influence of base rates on moral judgment, in the final pair of studies, we modified our experiments so as to include them. Specifically, in one study, we crossed prescriptive and proscriptive injunctive norms with high and low base rates and found that the impact of an injunctive norm outweighs any impact of the base rate. In the other study, we found that simply mentioning, for example, that there were some good people among those who engaged in a high base-rate behavior resulted in a large effect on moral judgment; not only on judgments of the target’s character, but also on judgments of blame and wrongness.

Saturday, July 3, 2021

Binding moral values gain importance in the presence of close others

Yudkin, D.A., Gantman, A.P., Hofmann, W. et al. 
Nat Commun 12, 2718 (2021). 


A key function of morality is to regulate social behavior. Research suggests moral values may be divided into two types: binding values, which govern behavior in groups, and individualizing values, which promote personal rights and freedoms. Because people tend to mentally activate concepts in situations in which they may prove useful, the importance they afford moral values may vary according to whom they are with in the moment. In particular, because binding values help regulate communal behavior, people may afford these values more importance when in the presence of close (versus distant) others. Five studies test and support this hypothesis. First, we use a custom smartphone application to repeatedly record participants’ (n = 1166) current social context and the importance they afforded moral values. Results show people rate moral values as more important when in the presence of close others, and this effect is stronger for binding than individualizing values—an effect that replicates in a large preregistered online sample (n = 2016). A lab study (n = 390) and two preregistered online experiments (n = 580 and n = 752) provide convergent evidence that people afford binding, but not individualizing, values more importance when in the real or imagined presence of close others. Our results suggest people selectively activate different moral values according to the demands of the situation, and show how the mere presence of others can affect moral thinking.

From the Discussion

Our findings converge with work highlighting the practical contexts where binding values are pitted against individualizing ones. Research on the psychology of whistleblowing, for example, suggests that the decision over whether to report unethical behavior in one’s own organization reflects a tradeoff between loyalty (to one’s community) and fairness (to society in general). Other research has found that increasing or decreasing people’s “psychological distance” from a situation affects the degree to which they apply binding versus individualizing principles. For example, research shows that prompting people to take a detached (versus immersed) perspective on their own actions renders them more likely to apply impartial principles in punishing close others for moral transgressions. By contrast, inducing feelings of empathy toward others (which could be construed as increasing feelings of psychological closeness) increases people’s likelihood of showing favoritism toward them in violation of general fairness norms. Our work highlights a psychological process that might help to explain these patterns of behavior: people are more prone to act according to binding values when they are with close others precisely because that relational context activates those values in the mind.

Sunday, March 7, 2021

Why do inequality and deprivation produce high crime and low trust?

De Courson, B., Nettle, D. 
Sci Rep 11, 1937 (2021). 


Humans sometimes cooperate to mutual advantage, and sometimes exploit one another. In industrialised societies, the prevalence of exploitation, in the form of crime, is related to the distribution of economic resources: more unequal societies tend to have higher crime, as well as lower social trust. We created a model of cooperation and exploitation to explore why this should be. Distinctively, our model features a desperation threshold, a level of resources below which it is extremely damaging to fall. Agents do not belong to fixed types, but condition their behaviour on their current resource level and the behaviour in the population around them. We show that the optimal action for individuals who are close to the desperation threshold is to exploit others. This remains true even in the presence of severe and probable punishment for exploitation, since successful exploitation is the quickest route out of desperation, whereas being punished does not make already desperate states much worse. Simulated populations with a sufficiently unequal distribution of resources rapidly evolve an equilibrium of low trust and zero cooperation: desperate individuals try to exploit, and non-desperate individuals avoid interaction altogether. Making the distribution of resources more equal or increasing social mobility is generally effective in producing a high cooperation, high trust equilibrium; increasing punishment severity is not.

From the Discussion

Within criminology, our prediction of risky exploitative behaviour when in danger of falling below a threshold of desperation is reminiscent of Merton’s strain theory of deviance. Under this theory, deviance results when individuals have a goal (remaining constantly above the threshold of participation in society), but the available legitimate means are insufficient to get them there (neither foraging alone nor cooperation has a large enough one-time payoff). They thus turn to risky alternatives, despite the drawbacks of these (see also Ref.32 for similar arguments). This explanation is not reducible to desperation making individuals discount the future more steeply, which is often invoked as an explanation for criminality. Agents in our model do not face choices between smaller-sooner and larger-later rewards; the payoff for exploitation is immediate, whether successful or unsuccessful. Also note the philosophical differences between our approach and ‘self-control’ styles of explanation. Those approaches see offending as deficient decision-making: it would be in people’s interests not to offend, but some can’t manage it (see Ref.35 for a critical review). Like economic and behavioural-ecological theories of crime more generally, ours assumes instead that there are certain situations or states where offending is the best of a bad set of available options.
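The threshold logic can be made concrete with a toy expected-utility calculation. This is a minimal sketch with invented numbers, not the authors' evolutionary simulation: utility below the desperation threshold is catastrophically low, so an already-desperate agent loses little by risking punishment.

```python
# Toy expected-utility comparison (illustrative numbers, not De Courson &
# Nettle's model). An agent with resources r chooses between safe foraging
# and risky exploitation. Falling below the desperation threshold is
# catastrophic, so an already-desperate agent loses little by being
# punished -- which makes the risky option rational near the threshold.

THRESHOLD = 10    # desperation threshold (assumed value)
PENALTY = -100    # utility of ending up below the threshold

def utility(resources):
    return resources if resources >= THRESHOLD else PENALTY

def ev_forage(r):
    """Safe option: small guaranteed gain."""
    return utility(r + 2)

def ev_exploit(r, p_success=0.5, gain=15, fine=12):
    """Risky option: large gain if successful, a punitive fine if caught."""
    return p_success * utility(r + gain) + (1 - p_success) * utility(r - fine)

for r in (5, 30):  # one desperate agent, one comfortable one
    best = "exploit" if ev_exploit(r) > ev_forage(r) else "forage"
    print(f"resources={r}: forage EV={ev_forage(r)}, exploit EV={ev_exploit(r)} -> {best}")
```

With these numbers, exploitation maximizes expected utility for the desperate agent but not for the comfortable one, even though the punishment parameters are identical, mirroring the paper's finding that harsher punishment does little to deter desperate agents.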

Sunday, February 28, 2021

How peer influence shapes value computation in moral decision-making

Yu, H., Siegel, J., Clithero, J., & Crockett, M. 
(2021, January 16).


Moral behavior is susceptible to peer influence. How does information from peers influence moral preferences? We used drift-diffusion modeling to show that peer influence changes the value of moral behavior by prioritizing the choice attributes that align with peers’ goals. Study 1 (N = 100; preregistered) showed that participants accurately inferred the goals of prosocial and antisocial peers when observing their moral decisions. In Study 2 (N = 68), participants made moral decisions before and after observing the decisions of a prosocial or antisocial peer. Peer observation caused participants’ own preferences to resemble those of their peers. This peer influence effect on value computation manifested as an increased weight on choice attributes promoting the peers’ goals that occurred independently from peer influence on initial choice bias. Participants’ self-reported awareness of influence tracked more closely with computational measures of prosocial than antisocial influence. Our findings have implications for bolstering and blocking the effects of prosocial and antisocial influence on moral behavior.
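The core idea of the drift-diffusion account (drift as a weighted sum of choice attributes, with peer influence shifting the attribute weights) can be sketched in a toy simulation. The two-attribute setup and all parameter values below are illustrative assumptions, not the authors' fitted model:

```python
import random

# Toy drift-diffusion simulation (illustrative parameters, not the authors'
# fitted model). Drift is a weighted sum of two choice attributes; "peer
# influence" is modeled as an increased weight on the attribute the peer's
# choices prioritize (here, the other person's payoff).

def ddm_choice(drift, threshold=1.0, noise=1.0, dt=0.01, rng=random):
    """Accumulate noisy evidence until it crosses +threshold (option A) or -threshold (option B)."""
    x = 0.0
    while abs(x) < threshold:
        x += drift * dt + rng.gauss(0, noise) * dt ** 0.5
    return x > 0  # True -> chose the generous option A

def p_choose_a(w_self, w_other, self_gain, other_gain, n=2000, seed=0):
    """Estimate the probability of choosing option A given attribute weights."""
    rng = random.Random(seed)
    drift = w_self * self_gain + w_other * other_gain
    return sum(ddm_choice(drift, rng=rng) for _ in range(n)) / n

# Option A benefits the other person at a small cost to self.
before = p_choose_a(w_self=1.0, w_other=0.5, self_gain=-0.3, other_gain=1.0)
# After observing a prosocial peer, the weight on others' outcomes increases.
after = p_choose_a(w_self=1.0, w_other=1.5, self_gain=-0.3, other_gain=1.0)
print(f"P(generous choice) before peer: {before:.2f}, after prosocial peer: {after:.2f}")
```

Raising the weight on the peer-aligned attribute increases the drift toward the prosocial option, so the generous choice becomes more likely without any change to the starting-point bias, which is the separation the paper's computational analysis draws.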

Sunday, September 13, 2020

Correlation not Causation: The Relationship between Personality Traits and Political Ideologies

B. Verhulst, L. J. Evans, & P. K. Hatemi
Am J Pol Sci. 2012; 56(1): 34–51.


The assumption in the personality and politics literature is that a person's personality motivates them to develop certain political attitudes later in life. This assumption is founded on the simple correlation between the two constructs and the observation that personality traits are genetically influenced and develop in infancy, whereas political preferences develop later in life. Work in psychology, behavioral genetics, and recently political science, however, has demonstrated that political preferences also develop in childhood and are equally influenced by genetic factors. These findings cast doubt on the assumed causal relationship between personality and politics. Here we test the causal relationship between personality traits and political attitudes using a direction of causation structural model on a genetically informative sample. The results suggest that personality traits do not cause people to develop political attitudes; rather, the correlation between the two is a function of an innate common underlying genetic factor.

From the Discussion section

Based on the current results, the claim that personality traits lead to political orientations should no longer be assumed, but explicitly tested for each personality and political trait prior to making any claims about their relationship. We recognize that no single analysis can provide a definitive answer to such a complex question, and our analysis did not include the Agreeableness, Conscientiousness, and Openness Five-Factor Model measures. Future studies which use different personality measures, or other methodological designs, including panel studies that examine the developmental trajectories of personality and attitudes from childhood to adulthood, would be invaluable for investigating more nuanced relationships between personality traits and political attitudes. These would also include models which capture the nonrandom selection into environments that foster the development of more liberal or conservative political attitudes (active gene-environment covariation) as well as the possibility for differential expression of personality traits and political attitudes at different stages of the developmental process that may illuminate “critical periods” for the interface of personality and attitudes.


Tuesday, March 24, 2020

The effectiveness of moral messages on public health behavioral intentions during the COVID-19 pandemic

J. Everett, C. Colombatto, & others
PsyArXiv PrePrints
Originally posted 20 March 2020

With the COVID-19 pandemic threatening millions of lives, changing our behaviors to prevent the spread of the disease is a moral imperative. Here, we investigated the effectiveness of messages inspired by three major moral traditions on public health behavioral intentions. A sample of US participants representative for age, sex and race/ethnicity (N=1032) viewed messages from either a leader or citizen containing deontological, virtue-based, utilitarian, or non-moral justifications for adopting social distancing behaviors during the COVID-19 pandemic. We measured the messages’ effects on participants’ self-reported intentions to wash hands, avoid social gatherings, self-isolate, and share health messages, as well as their beliefs about others’ intentions, impressions of the messenger’s morality and trustworthiness, and beliefs about personal control and responsibility for preventing the spread of disease. Consistent with our pre-registered predictions, deontological messages had modest effects across several measures of behavioral intentions, second-order beliefs, and impressions of the messenger, while virtue-based messages had modest effects on personal responsibility for preventing the spread. These effects were observed for messages from leaders and citizens alike. Our findings are at odds with participants’ own beliefs about moral persuasion: a majority of participants predicted the utilitarian message would be most effective. We caution that these effects are modest in size, likely due to ceiling effects on our measures of behavioral intentions and strong heterogeneity across all dependent measures along several demographic dimensions including age, self-identified gender, self-identified race, political conservatism, and religiosity. 
Although the utilitarian message was the least effective among those tested, individual differences in one key dimension of utilitarianism—impartial concern for the greater good—were strongly and positively associated with public health intentions and beliefs. Overall, our preliminary results suggest that public health messaging focused on duties and responsibilities toward family, friends and fellow citizens will be most effective in slowing the spread of COVID-19 in the US. Ongoing work is investigating whether deontological persuasion generalizes across different populations, what aspects of deontological messages drive their persuasive effects, and how such messages can be most effectively delivered across global populations.


Friday, January 10, 2020

Ethically Adrift: How Others Pull Our Moral Compass from True North, and How we Can Fix It

Moore, C., and F. Gino.
Research in Organizational Behavior 
33 (2013): 53–77.


This chapter is about the social nature of morality. Using the metaphor of the moral compass to describe individuals' inner sense of right and wrong, we offer a framework to help us understand social reasons why our moral compass can come under others' control, leading even good people to cross ethical boundaries. Departing from prior work focusing on the role of individuals' cognitive limitations in explaining unethical behavior, we focus on the socio-psychological processes that function as triggers of moral neglect, moral justification and immoral action, and their impact on moral behavior. In addition, our framework discusses organizational factors that exacerbate the detrimental effects of each trigger. We conclude by discussing implications and recommendations for organizational scholars to take a more integrative approach to developing and evaluating theory about unethical behavior.

From the Summary

Even when individuals are aware of the ethical dimensions of the choices they are making, they may still engage in unethical behavior as long as they recruit justifications for it. In this section, we discussed the role of two social–psychological processes – social comparison and self-verification – that facilitate moral justification, which will lead to immoral behavior. We also discussed three characteristics of organizational life that amplify these social–psychological processes. Specifically, we discussed how organizational identification, group loyalty, and framing or euphemistic language can all affect the likelihood and extent to which individuals justify their actions, by judging them as ethical when in fact they are morally contentious. Finally, we discussed moral disengagement, moral hypocrisy, and moral licensing as intrapersonal consequences of these social facilitators of moral justification.


Sunday, October 21, 2018

Leaders matter morally: The role of ethical leadership in shaping employee moral cognition and misconduct.

Moore, C., Mayer, D. M., Chiang, and others
Journal of Applied Psychology. Advance online publication.


There has long been interest in how leaders influence the unethical behavior of those who they lead. However, research in this area has tended to focus on leaders’ direct influence over subordinate behavior, such as through role modeling or eliciting positive social exchange. We extend this research by examining how ethical leaders affect how employees construe morally problematic decisions, ultimately influencing their behavior. Across four studies, diverse in methods (lab and field) and national context (the United States and China), we find that ethical leadership decreases employees’ propensity to morally disengage, with ultimate effects on employees’ unethical decisions and deviant behavior. Further, employee moral identity moderates this mediated effect. However, the form of this moderation is not consistent. In Studies 2 and 4, we find that ethical leaders have the largest positive influence over individuals with a weak moral identity (providing a “saving grace”), whereas in Study 3, we find that ethical leaders have the largest positive influence over individuals with a strong moral identity (catalyzing a “virtuous synergy”). We use these findings to speculate about when ethical leaders might function as a “saving grace” versus a “virtuous synergy.” Together, our results suggest that employee misconduct stems from a complex interaction between employees, their leaders, and the context in which this relationship takes place, specifically via leaders’ influence over employees’ moral cognition.

Beginning of the Discussion section

Three primary findings emerge from these four studies. First, we consistently find a negative relationship between ethical leadership and employee moral disengagement. This supports our primary hypothesis: leader behavior is associated with how employees construe decisions with ethical import. Our manipulation of ethical leadership and its resulting effects provide confidence that ethical leadership has a direct causal influence over employee moral disengagement.

In addition, this finding was consistent in both American and Chinese work contexts, suggesting the effect is not culturally bound.

Second, we also found evidence across all four studies that moral disengagement functions as a mechanism to explain the relationship between ethical leadership and employee unethical decisions and behaviors. Again, this result was consistent across time- and respondent-separated field studies and an experiment, in American and Chinese organizations, and using different measures of our primary constructs, providing important assurance of the generalizability of our findings and bolstering our confidence that moral disengagement is an important, unique, and robust mechanism explaining ethical leaders’ positive effects within their organizations.

Finally, we found persistent evidence that the centrality of an employee’s moral identity plays a key role in the relationship between ethical leadership and employee unethical decisions and behavior (through moral disengagement). However, the nature of this moderated relationship varied across studies.

Wednesday, July 12, 2017

Emotion shapes the diffusion of moralized content in social networks

William J. Brady, Julian A. Wills, John T. Jost, Joshua A. Tucker, and Jay J. Van Bavel
PNAS 2017; published ahead of print June 26, 2017


Political debate concerning moralized issues is increasingly common in online social networks. However, moral psychology has yet to incorporate the study of social networks to investigate processes by which some moral ideas spread more rapidly or broadly than others. Here, we show that the expression of moral emotion is key for the spread of moral and political ideas in online social networks, a process we call “moral contagion.” Using a large sample of social media communications about three polarizing moral/political issues (n = 563,312), we observed that the presence of moral-emotional words in messages increased their diffusion by a factor of 20% for each additional word. Furthermore, we found that moral contagion was bounded by group membership; moral-emotional language increased diffusion more strongly within liberal and conservative networks, and less between them. Our results highlight the importance of emotion in the social transmission of moral ideas and also demonstrate the utility of social network methods for studying morality. These findings offer insights into how people are exposed to moral and political ideas through social networks, thus expanding models of social influence and group polarization as people become increasingly immersed in social media networks.
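A back-of-envelope reading of the headline effect: a 20% diffusion increase per moral-emotional word compounds multiplicatively, as rate ratios in count-data regressions typically do. The baseline retweet count below is a made-up illustrative number:

```python
# Back-of-envelope illustration of the reported effect (not the authors'
# fitted regression): a 20% diffusion increase per moral-emotional word
# compounds multiplicatively, as a rate ratio does.

BASE_RETWEETS = 100   # hypothetical baseline for a message with no such words
RATE_RATIO = 1.20     # 20% increase per additional moral-emotional word

def expected_retweets(n_moral_emotional_words):
    return BASE_RETWEETS * RATE_RATIO ** n_moral_emotional_words

for n in range(4):
    print(n, round(expected_retweets(n), 1))  # 100.0, 120.0, 144.0, 172.8
```

On this reading, three moral-emotional words would be associated with roughly 73% more diffusion than none, not 60%, because the per-word increases multiply rather than add.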


Friday, May 20, 2016

Sleep Deprivation and Advice Taking

Jan Alexander Häusser, Johannes Leder, Charlene Ketturat, Martin Dresler & Nadira Sophie Faber
Scientific Reports 6, Article number: 24386 (2016)


Judgements and decisions in many political, economic or medical contexts are often made while sleep deprived. Furthermore, in such contexts individuals are required to integrate information provided by – more or less qualified – advisors. We asked if sleep deprivation affects advice taking. We conducted a 2 (sleep deprivation: yes vs. no) × 2 (competency of advisor: medium vs. high) experimental study to examine the effects of sleep deprivation on advice taking in an estimation task. We compared participants with one night of total sleep deprivation to participants with a night of regular sleep. Competency of advisor was manipulated within subjects. We found that sleep deprived participants show increased advice taking. An interaction of condition and competency of advisor and further post-hoc analyses revealed that this effect was more pronounced for the medium competency advisor compared to the high competency advisor. Furthermore, sleep deprived participants benefited more from an advisor of high competency in terms of stronger improvement in judgmental accuracy than well-rested participants.
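Advice taking in judge-advisor estimation tasks is commonly quantified as the "weight of advice" (WOA). The sketch below shows that standard measure; it is an assumption that the paper's operationalization resembles it:

```python
# "Weight of advice" (WOA), a standard judge-advisor measure (assumed here;
# the paper's exact operationalization may differ): 0 means the advice was
# ignored, 1 means the final estimate matched the advice exactly.

def weight_of_advice(initial, advice, final):
    if advice == initial:
        raise ValueError("WOA is undefined when advice equals the initial estimate")
    return (final - initial) / (advice - initial)

# A participant first guesses 40, hears an advisor say 60, then revises to 55.
print(weight_of_advice(initial=40, advice=60, final=55))  # 0.75
```

Increased advice taking after sleep deprivation would show up as higher average WOA, i.e. final estimates shifted further toward the advisor's value.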

The article is here.

Thursday, June 4, 2015

The neural pathways, development and functions of empathy

Jean Decety
Current Opinion in Behavioral Sciences
Volume 3, June 2015, Pages 1–6

Empathy reflects an innate ability to perceive and be sensitive to the emotional states of others coupled with a motivation to care for their wellbeing. It has evolved in the context of parental care for offspring as well as within kinship. Current work demonstrates that empathy is underpinned by circuits connecting the brainstem, amygdala, basal ganglia, anterior cingulate cortex, insula and orbitofrontal cortex, which are conserved across many species. Empirical studies document that empathetic reactions emerge early in life, and that they are not automatic. Rather they are heavily influenced and modulated by interpersonal and contextual factors, which impact behavior and cognitions. However, the mechanisms supporting empathy are also flexible and amenable to behavioral interventions that can promote caring beyond kin and kith.

The entire article is here.

Sunday, January 4, 2015

The Ethics of Nudging

By Cass Sunstein
Harvard Law School

This essay defends the following propositions. (1) It is pointless to object to choice architecture or nudging as such. Choice architecture cannot be avoided. Nature itself nudges; so does the weather; so do spontaneous orders and invisible hands. The private sector inevitably nudges, as does the government. It is reasonable to object to particular nudges, but not to nudging in general. (2) In this context, ethical abstractions (for example, about autonomy, dignity, and manipulation) can create serious confusion. To make progress, those abstractions must be brought into contact with concrete practices. Nudging and choice architecture take diverse forms, and the force of an ethical objection depends on the specific form. (3) If welfare is our guide, much nudging is actually required on ethical grounds. (4) If autonomy is our guide, much nudging is also required on ethical grounds. (5) Choice architecture should not, and need not, compromise either dignity or self-government, though imaginable forms could do both. (6) Some nudges are objectionable because the choice architect has illicit ends. When the ends are legitimate, and when nudges are fully transparent and subject to public scrutiny, a convincing ethical objection is less likely to be available. (7) There is, however, room for ethical objections in the case of well-motivated but manipulative interventions, certainly if people have not consented to them; such nudges can undermine autonomy and dignity. It follows that both the concept and the practice of manipulation deserve careful attention. The concept of manipulation has a core and a periphery; some interventions fit within the core, others within the periphery, and others outside of both.

The entire article is here.

Tuesday, June 10, 2014

I Don't Want to Be Right

By Maria Konnikova
The New Yorker
Originally published May 19, 2014

Last month, Brendan Nyhan, a professor of political science at Dartmouth, published the results of a study that he and a team of pediatricians and political scientists had been working on for three years. They had followed a group of almost two thousand parents, all of whom had at least one child under the age of seventeen, to test a simple relationship: Could various pro-vaccination campaigns change parental attitudes toward vaccines? Each household received one of four messages: a leaflet from the Centers for Disease Control and Prevention stating that there had been no evidence linking the measles, mumps, and rubella (M.M.R.) vaccine and autism; a leaflet from the Vaccine Information Statement on the dangers of the diseases that the M.M.R. vaccine prevents; photographs of children who had suffered from the diseases; and a dramatic story from the Centers for Disease Control and Prevention about an infant who almost died of measles. A control group did not receive any information at all. The goal was to test whether facts, science, emotions, or stories could make people change their minds.

The result was dramatic: a whole lot of nothing. None of the interventions worked.

The entire article is here.

Sunday, May 25, 2014

What Happens Before? A Field Experiment Exploring How Pay and Representation Differentially Shape Bias on the Pathway into Organizations

By Katherine Milkman, Modupe Akinola, and Dolly Chugh
Originally posted April 23, 2014


Little is known about how bias against women and minorities varies within and between organizations or how it manifests before individuals formally apply to organizations. We address this knowledge gap through an audit study in academia of over 6,500 professors at top U.S. universities drawn from 89 disciplines and 259 institutions. We hypothesized that discrimination would appear at the informal “pathway” preceding entry to academia and would vary by discipline and university as a function of faculty representation and pay. In our experiment, professors were contacted by fictional prospective students seeking to discuss research opportunities prior to applying to a doctoral program. Names of students were randomly assigned to signal gender and race (Caucasian, Black, Hispanic, Indian, Chinese), but messages were otherwise identical. We found that faculty ignored requests from women and minorities at a higher rate than requests from White males, particularly in higher-paying disciplines and private institutions. Counterintuitively, the representation of women and minorities was uncorrelated with bias, suggesting that greater representation cannot be assumed to reduce bias. This research highlights the importance of studying what happens before formal entry points into organizations and reveals that discrimination is not evenly distributed within and between organizations.

The entire research paper is here.

Monday, December 23, 2013

Should Studying Philosophy Change Us?

By Benjamin Barer
Huffington Post Blog
Originally posted on December 9, 2013

The years I spent studying academic (Western) philosophy during my undergraduate education were quite transformative. I was presented, and equipped, with a vocabulary to use in talking about the most difficult and timeless of issues, connecting myself to a strong tradition of other minds who devoted their lives to doing the same. Because of how meaningful I found the study of philosophy, it has remained puzzling to me that, almost without exception, no one else I have encountered who studied philosophy took away from it what I did.

The entire post is here.

Saturday, December 21, 2013

Ethical Considerations in the Development and Application of Mental and Behavioral Nosologies: Lessons from DSM-5

By Robert M. Gordon and Lisa Cosgrove
Psychological Injury and Law
December 13, 2013


We are not likely to find a diagnostic system “unethical,” per se, but rather find that it creates ethical concerns in its formulation and application. There is an increased risk of misuse and misunderstanding of the DSM-5, particularly when it is applied to forensic assessment, because of documented problems with reliability and validity. For example, when the DSM-5 was field tested, the American Psychiatric Association reported as acceptable diagnostic-category kappa levels that were far below the standard threshold of acceptability. The DSM-5 does not offer sensitivity and specificity levels, and psychologists must keep this in mind when using or teaching this manual. Also, especially in light of concerns about diagnostic inflation, we recommend that psychologists exercise caution when using the DSM-5 in forensic assessments, including civil and criminal cases. Alternatives to the DSM-5, such as the International Classification of Diseases and the Psychodynamic Diagnostic Manual, are reviewed.

Here is an excerpt:

It should be emphasized that ethical concerns about DSM-5 panel members having commercial ties are not meant in any way to imply that any task force or work group member intentionally made pro-industry decisions. Decades of research have demonstrated that cognitive biases are commonplace and very difficult to eradicate, and more recent studies suggest that disclosure of financial conflicts of interest may actually worsen bias (Dana & Loewenstein, 2003). This is because bias is most often manifested in subtle ways unbeknownst to the researcher or clinician, and thus is usually implicit and unintentional. Physicians—like everyone else—have ethical blind spots. Social scientists have documented the fact that physicians often fail to recognize their vulnerability to commercial interests because they mistakenly believe that they are immune to marketing and industry influence (Sah & Fugh-Berman, 2013).

The entire article is here.

Thursday, November 7, 2013

The Not-So-Hidden Cause Behind the A.D.H.D. Epidemic

The New York Times
Published: October 15, 2013

Here are two excerpts:

Of the 6.4 million kids who have been given diagnoses of A.D.H.D., a large percentage are unlikely to have any kind of physiological difference that would make them more distractible than the average non-A.D.H.D. kid. It’s also doubtful that biological or environmental changes are making physiological differences more prevalent. Instead, the rapid increase in people with A.D.H.D. probably has more to do with sociological factors — changes in the way we school our children, in the way we interact with doctors and in what we expect from our kids.

Which is not to say that A.D.H.D. is a made-up disorder.


This lack of rigor leaves room for plenty of diagnoses that are based on something other than biology. Case in point: The beginning of A.D.H.D. as an “epidemic” corresponds with a couple of important policy changes that incentivized diagnosis. The incorporation of A.D.H.D. under the Individuals With Disabilities Education Act in 1991 — and a subsequent overhaul of the Food and Drug Administration in 1997 that allowed drug companies to more easily market directly to the public — were hugely influential, according to Adam Rafalovich, a sociologist at Pacific University in Oregon.

The entire article is here.