Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Context. Show all posts

Saturday, June 10, 2023

Generative AI entails a credit–blame asymmetry

Porsdam Mann, S., Earp, B. et al. (2023).
Nature Machine Intelligence.

The recent releases of large-scale language models (LLMs), including OpenAI’s ChatGPT and GPT-4, Meta’s LLaMA, and Google’s Bard, have garnered substantial global attention, leading to calls for urgent community discussion of the ethical issues involved. LLMs generate text by representing and predicting statistical properties of language. Optimized for statistical patterns and linguistic form rather than for truth or reliability, these models cannot assess the quality of the information they use.

Recent work has highlighted ethical risks that are associated with LLMs, including biases that arise from training data; environmental and socioeconomic impacts; privacy and confidentiality risks; the perpetuation of stereotypes; and the potential for deliberate or accidental misuse. We focus on a distinct set of ethical questions concerning moral responsibility—specifically blame and credit—for LLM-generated
content. We argue that different responsibility standards apply to positive and negative uses (or outputs) of LLMs and offer preliminary recommendations. These include: calls for updated guidance from policymakers that reflect this asymmetry in responsibility standards; transparency norms; technology goals; and the establishment of interactive forums for participatory debate on LLMs.‌


Credit–blame asymmetry may lead to achievement gaps

Since the Industrial Revolution, automating technologies have made workers redundant in many industries, particularly in agriculture and manufacturing. The recent assumption has been that creatives and knowledge workers would remain much less impacted by these changes in the near-to-mid-term future. Advances in LLMs challenge this premise.

How these trends will impact human workforces is a key but unresolved question. The spread of AI-based applications and tools such as LLMs will not necessarily replace human workers; it may simply shift them to tasks that complement the functions of the AI. This may decrease opportunities for human beings to distinguish themselves or excel in workplace settings. Their future tasks may involve supervising or maintaining LLMs that produce the sorts of outputs (for example, text or recommendations) that skilled human beings were previously producing and for which they were receiving credit. Consequently, work in a world relying on LLMs might often involve ‘achievement gaps’ for human beings: good, useful outcomes will be produced, but many of them will not be achievements for which human workers and professionals can claim credit.

This may result in an odd state of affairs. If responsibility for positive and negative outcomes produced by LLMs is asymmetrical as we have suggested, humans may be justifiably held responsible for negative outcomes created, or allowed to happen, when they or their organizations make use of LLMs. At the same time, they may deserve less credit for AI-generated positive outcomes, as they may not be displaying the skills and talents needed to produce text, exerting judgment to make a recommendation, or generating other creative outputs.

Saturday, April 1, 2023

The effect of reward prediction errors on subjective affect depends on outcome valence and decision context

Forbes, L., & Bennett, D. (2023, January 20). 


The valence of an individual’s emotional response to an event is often thought to depend on their prior expectations for the event: better-than-expected outcomes produce positive affect and worse-than-expected outcomes produce negative affect. In recent years, this hypothesis has been instantiated within influential computational models of subjective affect that assume the valence of affect is driven by reward prediction errors. However, there remain a number of open questions regarding this association. In this project, we investigated the moderating effects of outcome valence and decision context (Experiment 1: free vs. forced choices; Experiment 2: trials with versus trials without counterfactual feedback) on the effects of reward prediction errors on subjective affect. We conducted two large-scale online experiments (N = 300 in total) of general-population samples recruited via Prolific to complete a risky decision-making task with embedded high-resolution sampling of subjective affect. Hierarchical Bayesian computational modelling revealed that the effects of reward prediction errors on subjective affect were significantly moderated by both outcome valence and decision context. Specifically, after accounting for concurrent reward amounts we found evidence that only negative reward prediction errors (worse-than-expected outcomes) influenced subjective affect, with no significant effect of positive reward prediction errors (better-than-expected outcomes). Moreover, these effects were only apparent on trials in which participants made a choice freely (but not on forced-choice trials) and when counterfactual feedback was absent (but not when counterfactual feedback was present). These results deepen our understanding of the effects of reward prediction errors on subjective affect.

From the General Discussion section

Our findings were twofold: first, we found that after accounting for the effects of concurrent reward amounts (gains/losses of points) on affect, the effects of RPEs were subtler and more nuanced than has been previously appreciated. Specifically, contrary to previous research, we found that only negative RPEs influenced subjective affect within our task, with no discernible effect of positive RPEs.  Second, we found that even the effect of negative RPEs on affect was dependent on the decision context within which the RPEs occurred.  We manipulated two features of decision context (Experiment 1: free-choice versus forced-choice trials; Experiment 2: trials with counterfactual feedback versus trials without counterfactual feedback) and found that both features of decision context significantly moderated the effect of negative RPEs on subjective affect. In Experiment 1, we found that negative RPEs only influenced subjective affect in free-choice trials, with no effect of negative RPEs in forced-choice trials. In Experiment 2, we similarly found that negative RPEs only influenced subjective affect when counterfactual feedback was absent, with no effect of negative RPEs when counterfactual feedback was present. We unpack and discuss each of these results separately below.
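The valence-asymmetric pattern described above can be sketched as a toy affect model. Everything here is illustrative: the linear form, the weights, and the variable names are assumptions for exposition, not the authors' fitted hierarchical Bayesian model. The sketch only encodes the qualitative finding that negative, but not positive, RPEs move subjective affect once concurrent reward is accounted for.

```python
def reward_prediction_error(expected_value, outcome):
    """RPE: outcome minus expectation (positive = better than expected)."""
    return outcome - expected_value

def predicted_affect(reward, rpe, w_reward=0.5, w_neg_rpe=0.8, w_pos_rpe=0.0):
    """Momentary affect as a weighted sum of the concurrent reward and
    valence-split RPE terms. Setting w_pos_rpe = 0 mimics the finding
    that only negative RPEs influence subjective affect."""
    neg = min(rpe, 0.0)   # worse-than-expected component
    pos = max(rpe, 0.0)   # better-than-expected component
    return w_reward * reward + w_neg_rpe * neg + w_pos_rpe * pos

# A worse-than-expected outcome drags affect down via the negative RPE term...
low = predicted_affect(reward=10, rpe=reward_prediction_error(50, 10))
# ...while an equally large positive surprise adds nothing beyond the reward itself.
high = predicted_affect(reward=90, rpe=reward_prediction_error(50, 90))
```

On this toy parameterization, the decision-context moderation reported in the two experiments would correspond to `w_neg_rpe` itself shrinking toward zero on forced-choice trials or when counterfactual feedback is shown.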

Editor's synopsis: Consistent with a large body of other research, "bad" is stronger than "good" in shaping appraisals and decisions, at least in the context of free (not forced) choice when no counterfactual information is available.

Important data points when working with patients who are making large life decisions.

Thursday, August 6, 2020

Influencing choices with conversational primes: How a magic trick unconsciously influences card choices

Alice Pailhès and Gustav Kuhn
PNAS, Jul 2020, 202000682
DOI: 10.1073/pnas.2000682117


Past research demonstrates that unconscious primes can affect people’s decisions. However, these free choice priming paradigms present participants with very few alternatives. Magicians’ forcing techniques provide a powerful tool to investigate how natural implicit primes can unconsciously influence decisions with multiple alternatives. We used video and live performances of the mental priming force. This technique uses subtle nonverbal and verbal conversational primes to influence spectators to choose the three of diamonds. Our results show that a large number of participants chose the target card while reporting feeling free and in control of their choice. Even when they were influenced by the primes, participants typically failed to give the reason for their choice. These results show that naturally embedding primes within a person’s speech and gestures effectively influenced people’s decision making. This raises the possibility that this form of mind control could be used to effectively manipulate other mental processes.


This paper shows that naturally embedding primes within a person’s speech and gestures effectively influences people’s decision making. Likewise, our results dovetail with findings from the choice blindness literature, illustrating that people often do not know the real reason for their choice. Magicians’ forcing techniques may provide a powerful and reliable way of studying these mental processes, and our paper illustrates how this can be done. Moreover, our results raise the possibility that this form of mind control could be used to effectively manipulate other mental processes.

A pdf of the research can be found here.

Editor's Note: This research has implications of how psychologists may consciously or unconsciously influence patient choices.

Monday, February 10, 2020

Can Robots Reduce Racism And Sexism?

Kim Elsesser
Originally posted 16 Jan 20

Robots are becoming a regular part of our workplaces, serving as supermarket cashiers and building our cars. More recently they’ve been tackling even more complicated tasks like driving and sensing emotions. Estimates suggest that about half of the work humans currently do will be automated by 2055, but there may be a silver lining to the loss of human jobs to robots. New research indicates that robots at work can help reduce prejudice and discrimination.

Apparently, just thinking about robot workers leads people to think they have more in common with other human groups according to research published in American Psychologist. When the study participants’ awareness of robot workers increased, they became more accepting of immigrants and people of a different religion, race, and sexual orientation.

Basically, the robots reduced prejudice by highlighting the existence of a group that is not human. Study authors, Joshua Conrad Jackson, Noah Castelo and Kurt Gray, summarized, “The large differences between humans and robots may make the differences between humans seem smaller than they normally appear. Christians and Muslims have different beliefs, but at least both are made from flesh and blood; Latinos and Asians may eat different foods, but at least they eat.” Instead of categorizing people by race or religion, thinking about robots made participants more likely to think of everyone as belonging to one human category.

The info is here.

Sunday, October 20, 2019

Moral Judgment and Decision Making

Bartels, D. M., and others (2015)
In G. Keren & G. Wu (Eds.)
The Wiley Blackwell Handbook of Judgment and Decision Making.

From the Introduction

Our focus in this essay is moral flexibility, a term that we use to capture the thesis that people are strongly motivated to adhere to and affirm their moral beliefs in their judgments and choices—they really want to get it right, they really want to do the right thing—but context strongly influences which moral beliefs are brought to bear in a given situation (cf. Bartels, 2008). In what follows, we review contemporary research on moral judgment and decision making and suggest ways that the major themes in the literature relate to the notion of moral flexibility. First, we take a step back and explain what makes moral judgment and decision making unique. We then review three major research themes and their explananda: (i) morally prohibited value tradeoffs in decision making, (ii) rules, reason, and emotion in tradeoffs, and (iii) judgments of moral blame and punishment. We conclude by commenting on methodological desiderata and presenting understudied areas of inquiry.


Moral thinking pervades everyday decision making, and so understanding the psychological underpinnings of moral judgment and decision making is an important goal for the behavioral sciences. Research that focuses on rule-based models makes moral decisions appear straightforward and rigid, but our review suggests that they are more complicated. Our attempt to document the state of the field reveals the diversity of approaches that (indirectly) reveals the flexibility of moral decision making systems. Whether they are study participants, policy makers, or the person on the street, people are strongly motivated to adhere to and affirm their moral beliefs—they want to make the right judgments and choices, and do the right thing. But what is right and wrong, like many things, depends in part on the situation. So while moral judgments and choices can be accurately characterized as using moral rules, they are also characterized by a striking ability to adapt to situations that require flexibility.

Consistent with this theme, our review suggests that context strongly influences which moral principles people use to judge actions and actors and that apparent inconsistencies across situations need not be interpreted as evidence of moral bias, error, hypocrisy, weakness, or failure.  One implication of the evidence for moral flexibility we have presented is that it might be difficult for any single framework to capture moral judgments and decisions (and this may help explain why no fully descriptive and consensus model of moral judgment and decision making exists despite decades of research). While several interesting puzzle pieces have been identified, the big picture remains unclear. We cannot even be certain that all of these pieces belong to just one puzzle.  Fortunately for researchers interested in this area, there is much left to be learned, and we suspect that the coming decades will budge us closer to a complete understanding of moral judgment and decision making.

A pdf of the book chapter can be downloaded here.

Wednesday, June 26, 2019

The computational and neural substrates of moral strategies in social decision-making

Jeroen M. van Baar, Luke J. Chang & Alan G. Sanfey
Nature Communications, Volume 10, Article number: 1483 (2019)


Individuals employ different moral principles to guide their social decision-making, thus expressing a specific ‘moral strategy’. Which computations characterize different moral strategies, and how might they be instantiated in the brain? Here, we tackle these questions in the context of decisions about reciprocity using a modified Trust Game. We show that different participants spontaneously and consistently employ different moral strategies. By mapping an integrative computational model of reciprocity decisions onto brain activity using inter-subject representational similarity analysis of fMRI data, we find markedly different neural substrates for the strategies of ‘guilt aversion’ and ‘inequity aversion’, even under conditions where the two strategies produce the same choices. We also identify a new strategy, ‘moral opportunism’, in which participants adaptively switch between guilt and inequity aversion, with a corresponding switch observed in their neural activation patterns. These findings provide a valuable view into understanding how different individuals may utilize different moral principles.


From the Discussion

We also report a new strategy observed in participants, moral opportunism. This group did not consistently apply one moral rule to their decisions, but rather appeared to make a motivational trade-off depending on the particular trial structure. This opportunistic decision strategy entailed switching between the behavioral patterns of guilt aversion and inequity aversion, and allowed participants to maximize their financial payoff while still always following a moral rule. Although it could have been the case that these opportunists merely resembled GA and IA in terms of decision outcome, and not in the underlying psychological process, a confirmatory analysis showed that the moral opportunists did in fact switch between the neural representations of guilt and inequity aversion, and thus flexibly employed the respective psychological processes underlying these two, quite different, social preferences. This further supports our interpretation that the activity patterns directly reflect guilt aversion and inequity aversion computations, and not a theoretically peripheral “third factor” shared between GA or IA participants. Additionally, we found activity patterns specifically linked to moral opportunism in the superior parietal cortex and dACC, which are strongly associated with cognitive control and working memory.
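The three strategies discussed here can be made concrete with a minimal utility sketch for the trustee's decision in a Trust Game. This is not the paper's fitted computational model: the linear functional forms, the parameter values `theta` and `phi`, and the payoff grid are all illustrative assumptions. Guilt aversion penalizes returning less than the investor is believed to expect; inequity aversion (Fehr–Schmidt style) penalizes unequal final payoffs; the 'moral opportunist' follows whichever rule is cheaper to satisfy on a given trial.

```python
def guilt_aversion_u(kept, returned, expected_return, theta=1.5):
    # Guilt: disutility from returning less than the investor is
    # believed to expect.
    return kept - theta * max(expected_return - returned, 0.0)

def inequity_aversion_u(kept, returned, phi=1.5):
    # Fehr-Schmidt-style disutility from unequal final payoffs.
    return kept - phi * abs(kept - returned)

def opportunistic_return(total, expected_return, theta=1.5, phi=1.5):
    """Amount returned by a 'moral opportunist': score every option
    under both moral rules and act on whichever rule yields the higher
    utility for this trial's structure."""
    best, best_u = None, float("-inf")
    for returned in range(total + 1):
        kept = total - returned
        u = max(guilt_aversion_u(kept, returned, expected_return, theta),
                inequity_aversion_u(kept, returned, phi))
        if u > best_u:
            best, best_u = returned, u
    return best
```

With a pot of 20, this agent behaves like a guilt-averse player when the believed expectation is low (returning exactly what is expected) and like an inequity-averse player when the expectation is high (returning an equal split), which is the behavioral switching the neural analysis tracked.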

The research is here.

Friday, June 7, 2019

Trading morality for a good economy

Michael Gerson
Originally posted May 28, 2019

Here is an excerpt:

Bennett went on to talk about how capitalism itself depends on good private character; how our system of government requires leaders of integrity; how failings of character can't be neatly compartmentalized. "A president whose character manifests itself in patterns of reckless personal conduct, deceit, abuse of power and contempt for the rule of law," he wrote, "cannot be a good president."

Above all, Bennett argued that the cultivation of character depends on the principled conduct of those in positions of public trust. "During moments of crisis," he wrote, "of unfolding scandal, people watch closely. They learn from what they see. And they often embrace a prevailing attitude and ethos, and employ what seems to work for others. So it matters if the legacy of the president is that the ends justify the means; that rules do not apply across the board; that lawlessness can be excused. It matters, too, if we demean the presidency by lowering our standards of expectations for the office and by redefining moral authority down. It matters if truth becomes incidental, and public office is used to cover up misdeeds. And it matters if we treat a president as if he were a king, above the law."

All this was written while Bill Clinton was president. And Bennett himself now seems reluctant to apply these rules "across the board" to a Republican president. This is not unusual. It is the political norm to ignore the poor character of politicians we agree with. But this does nothing to discredit Bennett's argument.

If you are a sexual harasser who wants to escape consequences, or a businessperson who habitually plays close to ethical lines, your hour has come. If you dream of having a porn-star mistress, or hope to game the tax system for your benefit, you have found your man and your moment. For all that is bent and sleazy, for all that is dishonest and dodgy, these are the golden days.

The info is here.

Monday, May 6, 2019

How do we make moral decisions?

Dartmouth College
Press Release
Originally released April 18, 2019

When it comes to making moral decisions, we often think of the golden rule: do unto others as you would have them do unto you. Yet, why we make such decisions has been widely debated. Are we motivated by feelings of guilt, where we don't want to feel bad for letting the other person down? Or by fairness, where we want to avoid unequal outcomes? Some people may rely on principles of both guilt and fairness and may switch their moral rule depending on the circumstances, according to a Radboud University - Dartmouth College study on moral decision-making and cooperation. The findings challenge prior research in economics, psychology and neuroscience, which is often based on the premise that people are motivated by one moral principle, which remains constant over time. The study was published recently in Nature Communications.

"Our study demonstrates that with moral behavior, people may not in fact always stick to the golden rule. While most people tend to exhibit some concern for others, others may demonstrate what we have called 'moral opportunism,' where they still want to look moral but want to maximize their own benefit," said lead author Jeroen van Baar, a postdoctoral research associate in the department of cognitive, linguistic and psychological sciences at Brown University, who started this research when he was a scholar at Dartmouth visiting from the Donders Institute for Brain, Cognition and Behavior at Radboud University.

"In everyday life, we may not notice that our morals are context-dependent since our contexts tend to stay the same daily. However, under new circumstances, we may find that the moral rules we thought we'd always follow are actually quite malleable," explained co-author Luke J. Chang, an assistant professor of psychological and brain sciences and director of the Computational Social Affective Neuroscience Laboratory (Cosan Lab) at Dartmouth. "This has tremendous ramifications if one considers how our moral behavior could change under new contexts, such as during war," he added.

The info is here.

The research is here.

Tuesday, January 29, 2019

Even arbitrary norms influence moral decision-making

Campbell Pryor, Amy Perfors & Piers D. L. Howe
Nature Human Behaviour (2018)


It is well known that individuals tend to copy behaviours that are common among other people—a phenomenon known as the descriptive norm effect. This effect has been successfully used to encourage a range of real-world prosocial decisions, such as increasing organ donor registrations. However, it is still unclear why it occurs. Here, we show that people conform to social norms, even when they understand that the norms in question are arbitrary and do not reflect the actual preferences of other people. These results hold across multiple contexts and when controlling for confounds such as anchoring or mere-exposure effects. Moreover, we demonstrate that the degree to which participants conform to an arbitrary norm is determined by the degree to which they self-identify with the group that exhibits the norm. Two prominent explanations of norm adherence—the informational and social sanction accounts—cannot explain these results, suggesting that these theories need to be supplemented by an additional mechanism that takes into account self-identity.

The info is here.

Sunday, January 27, 2019

Expectations Bias Moral Evaluations

Derek Powell and Zachary Horne
PsyArXiv Preprints
Originally created on December 23, 2018


People’s expectations play an important role in their reactions to events. There is often disappointment when events fail to meet expectations and a special thrill to having one’s expectations exceeded. We propose that expectations influence evaluations through information-theoretic principles: less expected events do more to inform us about the state of the world than do more expected events. An implication of this proposal is that people may have inappropriately muted responses to morally significant but expected events. In two preregistered experiments, we found that people’s judgments of morally-significant events were affected by the likelihood of that event. People were more upset about events that were unexpected (e.g., a robbery at a clothing store) than events that were more expected (e.g., a robbery at a convenience store). We argue that this bias has pernicious moral consequences, including leading to reduced concern for victims in most need of help.
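The information-theoretic principle invoked here can be stated precisely with Shannon surprisal: an event with probability p conveys -log2(p) bits, so less expected events are more informative. The base rates below are purely hypothetical numbers chosen to mirror the paper's robbery example, not values from the experiments.

```python
import math

def surprisal_bits(p):
    """Shannon surprisal: information conveyed by an event of probability p."""
    return -math.log2(p)

# Hypothetical base rates, for illustration only:
p_convenience_store_robbery = 0.10   # relatively expected event
p_clothing_store_robbery = 0.01      # relatively unexpected event

more_expected = surprisal_bits(p_convenience_store_robbery)  # fewer bits
less_expected = surprisal_bits(p_clothing_store_robbery)     # more bits
```

On the authors' proposal, the affective and moral response scales with this informational quantity, which is why the equally harmful but more expected robbery elicits the muted reaction.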

The preprint is here.

Monday, July 23, 2018

Assessing the contextual stability of moral foundations: Evidence from a survey experiment

David Ciuk
Research and Politics
First Published June 20, 2018


Moral foundations theory (MFT) claims that individuals use their intuitions on five “virtues” as guidelines for moral judgment, and recent research makes the case that these intuitions cause people to adopt important political attitudes, including partisanship and ideology. New work in political science, however, demonstrates not only that the causal effect of moral foundations on these political predispositions is weaker than once thought, but it also opens the door to the possibility that causality runs in the opposite direction—from political predispositions to moral foundations. In this manuscript, I build on this new work and test the extent to which partisan and ideological considerations cause individuals’ moral foundations to shift in predictable ways. The results show that while these group-based cues do exert some influence on moral foundations, the effects of outgroup cues are particularly strong. I conclude that small shifts in political context do cause MFT measures to move, and, to close, I discuss the need for continued theoretical development in MFT as well as an increased attention to measurement.

The research is here.

Sunday, December 17, 2017

Punish the Perpetrator or Compensate the Victim?

Yingjie Liu, Lin Li, Li Zheng, and Xiuyan Guo
Front. Psychol., 28 November 2017


Third-party punishment and third-party compensation are primary responses to observed norm violations. Previous studies have mostly investigated these behaviors in gain rather than loss contexts, and few have directly compared the two behaviors. We conducted three experiments to investigate third-party punishment and third-party compensation in gain and loss contexts. Participants observed two persons playing a Dictator Game to share an amount of gain or loss, and the proposer would sometimes propose an unfair distribution. In Study 1A, participants decided whether they wanted to punish the proposer. In Study 1B, participants decided whether to compensate the recipient or to do nothing. These two experiments explored how gain and loss contexts might affect the willingness to altruistically punish a perpetrator, or to compensate a victim of unfairness. Results suggested that both third-party punishment and compensation were stronger in the loss context. Study 2 directly compared third-party punishment and third-party compensation in both contexts by allowing participants to choose between punishment, compensation, and keeping the money. Participants chose compensation more often than punishment in the loss context, and chose punishment more often in the gain context. Empathic concern partly explained the between-context differences in altruistic compensation and punishment. Our findings provide insight into the modulating effect of context on third-party altruistic decisions.

The research is here.

Wednesday, June 29, 2016

The Meaning(s) of Situationism

Michelle Ciurria
Teaching Ethics 15:1 (Spring 2015)
DOI: 10.5840/tej201411310


This paper is about the meaning(s) of situationism. Philosophers have drawn various conclusions about situationism, some more favourable than others. Moreover, there is a difference between public reception of situationism, which has been very enthusiastic, and scholarly reception, which has been more cynical. In this paper, I outline what I take to be four key implications of situationism, based on careful scrutiny of the literature. Some situationist accounts, it turns out, are inconsistent with others, or incongruous with the logic of situationist psychology. If we are to teach students about situationism, we must first strive for relative consensus amongst experts, and then disseminate the results to philosophical educators in various fields.

The article is here.

Tuesday, June 28, 2016

Moral enhancements 2

By Michelle Ciurria
Moral Responsibility Blog
Originally published June 4, 2016

Here is an excerpt:

Here, I want to consider whether intended moral enhancements – those intended to induce pro-moral effects – can, somewhat paradoxically, undermine responsibility. I say ‘intended’ because, as we saw, moral interventions can have unintended (even counter-moral) consequences. This can happen for any number of reasons: the intervener can be wrong about what morality requires (imagine a Nazi intervener thinking that anti-Semitism is a pro-moral trait); the intervention can malfunction over time; the intervention can produce traits that are moral in one context but counter-moral in another (which seems likely, given that traits are highly context-sensitive, as I mentioned earlier); and so on – I won’t give a complete list. Even extant psychoactive drugs – which can count as a type of passive intervention – typically come with adverse side-effects; but the risk of unintended side-effects for futuristic interventions of a moral nature is substantially greater and more worrisome, because the technology is new, it operates on complicated cognitive structures, and it specifically operates on those structures constitutive of a person’s moral personality. Since intended moral interventions do not always produce their intended effects (pro-moral effects), I’ll discuss these interventions under two guises: interventions that go as planned and induce pro-moral traits (effective cases), and interventions that go awry (ineffective cases). I’ll also focus on the most controversial case of passive intervention: involuntary intervention, without informed consent.

The blog post is here.

Monday, June 27, 2016

Moral enhancements & moral responsibility

By Michelle Ciurria
Moral Responsibility Blog
Originally published May 25, 2016

Here is an excerpt:

What are our duties with respect to moral enhancements? We can approach this question from two directions: our individual duty to use or submit to moral interventions, and our duty to provide or administer them to people with moral deficits. This might seem to suggest a distinction between self-regarding duties and other-regarding duties, but this is a false dichotomy because the duty to enhance oneself is partly a duty to others – a duty to equip oneself to respect other people’s rights and interests. So both duties have an other-regarding dimension. The distinction I’m talking about is between duties to enhance oneself, and duties to enhance other people: self-directed duties and other-directed duties.

These two duties also cannot be neatly demarcated because we might need to weigh self-directed duties against other-directed duties to achieve a proper balance. That is, given finite time and resources, my duty to enhance myself in some way might be outweighed by my duty to foster the capabilities of another person. So we need to work out a proper balance, and different normative frameworks will provide different answers. All frameworks, however, seem to support these two kinds of duties, though they balance them differently.

The article is here.

Friday, March 4, 2016

Reconceptualizing Autonomy: A Relational Turn in Bioethics

Bruce Jennings
The Hastings Center Report
Article first published online: 5 FEB 2016
DOI: 10.1002/hast.544

History's judgment on the success of bioethics will not depend solely on the conceptual creativity and innovation in the field at the level of ethical and political theory, but this intellectual work is not insignificant. One important new development is what I shall refer to as the relational turn in bioethics. This development represents a renewed emphasis on the ideographic approach, which interprets the meaning of right and wrong in human actions as they are inscribed in social and cultural practices and in structures of lived meaning and interdependence; in an ideographic approach, the task of bioethics is to bring practice into theory, not the other way around.

The relational turn in bioethics may profoundly affect the critical questions that the field asks and the ethical guidance it offers society, politics, and policy. The relational turn provides a way of correcting the excessive atomism of many individualistic perspectives that have been, and continue to be, influential in bioethics. Nonetheless, I would argue that most of the work reflecting the relational turn remains distinctively liberal in its respect for the ethical significance of the human individual. It moves away from individualism, but not from the value of individuality. In this review essay, I shall focus on how the relational turn has manifested itself in work on core concepts in bioethics, especially liberty and autonomy. Following a general review, I conclude with a brief consideration of two important recent books in this area: Jennifer Nedelsky's Law's Relations and Rachel Haliburton's Autonomy and the Situated Self.

The article is here.

Thursday, November 12, 2015

The Ethics of Killing Baby Hitler

By Matt Ford
The Atlantic
Originally published October 24, 2015

Here is an excerpt:

The strongest argument for removing Hitler from history is the Holocaust, since it can be directly tied to his existence. The exact mechanisms of the Holocaust—the Nuremberg laws, Kristallnacht, the death squads, the gas chambers, the forced marches, and more—are unquestionably the products of Hitler and his disciples, and they likely would not have existed without him. All other things being equal, a choice between Hitler and the Holocaust is an easy one.

But focusing on Hitler’s direct responsibility for the Holocaust blinds us to more disturbing truths about the early 20th century. His absence from history would not remove the underlying political ideologies or social movements that fueled his ascendancy. Before his rise to power, eugenic theories already held sway in Western countries. Anti-Semitism infected civic discourse and state policy, even in the United States. Concepts like ethnic hierarchies and racial supremacy influenced mainstream political thought in Germany and throughout the West. Focusing on Hitler’s central role in the Holocaust also risks ignoring the thousands of participants who helped carry it out, both within Germany and throughout occupied Europe, and on the social and political forces that preceded it. It’s not impossible that in a climate of economic depression and scientific racism, another German leader could also move towards a similar genocidal end, even if he deviated from Hitler’s exact worldview or methods.

The entire article is here.

Friday, October 2, 2015

What Is Quantum Cognition, and How Is It Applied to Psychology?

By Jerome Busemeyer and Zheng Wang
Current Directions in Psychological Science 
June 2015 vol. 24 no. 3 163-169


Quantum cognition is a new research program that uses mathematical principles from quantum theory as a framework to explain human cognition, including judgment and decision making, concepts, reasoning, memory, and perception. This research is not concerned with whether the brain is a quantum computer. Instead, it uses quantum theory as a fresh conceptual framework and a coherent set of formal tools for explaining puzzling empirical findings in psychology. In this introduction, we focus on two quantum principles as examples to show why quantum cognition is an appealing new theoretical direction for psychology: complementarity, which suggests that some psychological measures have to be made sequentially and that the context generated by the first measure can influence responses to the next one, producing measurement order effects, and superposition, which suggests that some psychological states cannot be defined with respect to definite values but, instead, that all possible values within the superposition have some potential for being expressed. We present evidence showing how these two principles work together to provide a coherent explanation for many divergent and puzzling phenomena in psychology.
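The complementarity principle described above can be made concrete with a small numerical sketch (my own illustration, not taken from the article): if "yes" answers to two survey questions are modeled as projections onto non-commuting directions, the probability of answering "yes" to both depends on which question is asked first, reproducing a measurement order effect.

```python
import numpy as np

def projector(theta):
    """Projector onto the 1-D subspace of R^2 at angle theta."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])       # initial belief state (unit vector)
P_a = projector(np.pi / 8)       # "yes" to question A (angles are arbitrary)
P_b = projector(np.pi / 3)       # "yes" to question B

# Probability of answering "yes" to both when A is asked first, then B
p_ab = np.linalg.norm(P_b @ P_a @ psi) ** 2
# ...and when B is asked first, then A
p_ba = np.linalg.norm(P_a @ P_b @ psi) ** 2

print(p_ab, p_ba)  # the two question orders yield different probabilities
```

Because the two projectors do not commute, the first question changes the state on which the second is evaluated, which is exactly the context effect the abstract attributes to complementarity.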

The entire article is here.

Tuesday, June 16, 2015

Affective basis of judgment-behavior discrepancy in virtual experiences of moral dilemmas

I. Patil, C. Cogoni, N. Zangrando, L. Chittaro, and G. Silani
Social Neuroscience, 2014
Vol. 9, No. 1, 94-107


Although research in moral psychology in the last decade has relied heavily on hypothetical moral dilemmas and has been effective in understanding moral judgment, how these judgments translate into behaviors remains a largely unexplored issue due to the harmful nature of the acts involved. To study this link, we follow a new approach based on a desktop virtual reality environment. In our within-subjects experiment, participants exhibited an order-dependent judgment-behavior discrepancy across temporally separated sessions, with many of them behaving in a utilitarian manner in virtual reality dilemmas despite their nonutilitarian judgments for the same dilemmas in textual descriptions. This change in decisions was reflected in participants' autonomic arousal: dilemmas in virtual reality were perceived as more emotionally arousing than those in text, after controlling for general differences between the two presentation modalities (virtual reality vs. text). This suggests that moral decision-making in hypothetical moral dilemmas is susceptible to the contextual saliency of the presentation of these dilemmas.

The entire article is here.

Tuesday, December 9, 2014

What we say and what we do: The relationship between real and hypothetical moral choices

By Oriel FeldmanHall, Dean Mobbs, Davy Evans, Lucy Hiscox, Lauren Navrady, & Tim Dalgleish
Cognition. Jun 2012; 123(3): 434–441.
doi:  10.1016/j.cognition.2012.02.001


Moral ideals are strongly ingrained within society and individuals alike, but actual moral choices are profoundly influenced by tangible rewards and consequences. Across two studies we show that real moral decisions can dramatically contradict moral choices made in hypothetical scenarios (Study 1). However, by systematically enhancing the contextual information available to subjects when addressing a hypothetical moral problem—thereby reducing the opportunity for mental simulation—we were able to incrementally bring subjects’ responses in line with their moral behaviour in real situations (Study 2). These results imply that previous work relying mainly on decontextualized hypothetical scenarios may not accurately reflect moral decisions in everyday life. The findings also shed light on contextual factors that can alter how moral decisions are made, such as the salience of a personal gain.


  • We show people are unable to appropriately judge outcomes of moral behaviour.
  • Moral beliefs have weaker impact in the presence of significant self-gain.
  • People make highly self-serving choices in real moral situations.
  • Real moral choices contradict responses to simple hypothetical moral probes.
  • Enhancing context can cause hypothetical decisions to mirror real moral decisions.