Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Situationism.

Tuesday, August 3, 2021

Get lucky: Situationism and circumstantial moral luck

Marcela Herdova & Stephen Kearns 
(2015) Philosophical Explorations, 18:3, 362-377
DOI: 10.1080/13869795.2015.1026923

Abstract

Situationism is, roughly, the thesis that normatively irrelevant environmental factors have a great impact on our behaviour without our being aware of this influence. Surprisingly, there has been little work done on the connection between situationism and moral luck. Given that it is often a matter of luck what situations we find ourselves in, and that we are greatly influenced by the circumstances we face, it seems also to be a matter of luck whether we are blameworthy or praiseworthy for our actions in those circumstances. We argue that such situationist moral luck, as a variety of circumstantial moral luck, exemplifies a distinct and interesting type of moral luck. Further, there is a case to be made that situationist moral luck is perhaps more worrying than some other well-discussed cases of (supposed) moral luck.

From the Conclusion

Those who insist on the significance of luck to our practices of moral assessment are on somewhat of a tightrope. If we consider agents who differ only in the external results of their actions, and who are faced with normatively similar circumstances, it is difficult to maintain that there is any major difference in the degree of such agents’ moral responsibility. If we consider agents that differ rather significantly, and face normatively distinct situations, then though luck may play a role in what normative circumstances they face, there is much to base a moral assessment on that is either under the agents’ control or distinctive of each agent and their respective responses to their normative circumstances (or both). The role luck plays in our assessments of such agents, then, is arguably small enough that it is unclear that any difference in moral assessment can be properly said to be due to this luck (at least to an extent that should worry us or that is in considerable tension with our usual moral thinking).

Wednesday, July 11, 2018

The Lifespan of a Lie

Ben Blum
Medium.com
Originally posted June 7, 2018

Here is an excerpt:

Somehow, neither Prescott’s letter nor the failed replication nor the numerous academic critiques have so far lessened the grip of Zimbardo’s tale on the public imagination. The appeal of the Stanford prison experiment seems to go deeper than its scientific validity, perhaps because it tells us a story about ourselves that we desperately want to believe: that we, as individuals, cannot really be held accountable for the sometimes reprehensible things we do. As troubling as it might seem to accept Zimbardo’s fallen vision of human nature, it is also profoundly liberating. It means we’re off the hook. Our actions are determined by circumstance. Our fallibility is situational. Just as the Gospel promised to absolve us of our sins if we would only believe, the SPE offered a form of redemption tailor-made for a scientific era, and we embraced it.

For psychology professors, the Stanford prison experiment is a reliable crowd-pleaser, typically presented with lots of vividly disturbing video footage. In introductory psychology lecture halls, often filled with students from other majors, the counterintuitive assertion that students’ own belief in their inherent goodness is flatly wrong offers dramatic proof of psychology’s ability to teach them new and surprising things about themselves. Some intro psych professors I spoke to felt that it helped instill the understanding that those who do bad things are not necessarily bad people. Others pointed to the importance of teaching students in our unusually individualistic culture that their actions are profoundly influenced by external factors.

(cut)

But if Zimbardo’s work was so profoundly unscientific, how can we trust the stories it claims to tell? Many other studies, such as Solomon Asch’s famous experiment demonstrating that people will ignore the evidence of their own eyes in conforming to group judgments about line lengths, illustrate the profound effect our environments can have on us. The far more methodologically sound — but still controversial — Milgram experiment demonstrates how prone we are to obedience in certain settings. What is unique, and uniquely compelling, about Zimbardo’s narrative of the Stanford prison experiment is its suggestion that all it takes to make us enthusiastic sadists is a jumpsuit, a billy club, and the green light to dominate our fellow human beings.

The article is here.

Thursday, December 28, 2017

‘Politicians want us to be fearful. They’re manipulating us for their own interest’

Decca Aitkenhead
The Guardian
Originally published December 8, 2017

Here are two excerpts:

“Yes, I hate to say it, but yes. Democracy is an advance past the tribal nature of our being, the tribal nature of society, which was there for hundreds of thousands, if not millions, of years. It’s very easy for us to fall back into our tribal, evolutionary nature – tribe against tribe, us against them. It’s a very powerful motivator.” Because it speaks to our most primitive self? “Yes, and we don’t realise how powerful it is.” Until we have understood its power, Bargh argues, we have no hope of overcoming it. “So that’s what we have to do.” As he writes: “Refusing to believe the evidence, just to maintain one’s belief in free will, actually reduces the amount of free will that person has.”

(cut)

Participants were asked to fill out an anonymous questionnaire devised to reveal their willingness to use power over a woman to extract sexual favours if guaranteed to get away with it. Some were asked to rate a female participant’s attractiveness. Others were first primed by a word-association technique, using words such as “boss”, “authority”, “status” and “power”, and then asked to rate her. Bargh found the power-priming made no difference whatsoever to men who had scored low on sexual harassment and aggression tendencies. Among men who had scored highly, however, it was a very different case. Without the notion of power being activated in their brains, they found her unattractive. She only became attractive to them once the idea of power was active in their minds.

This, Bargh suggests, might explain how sexual harassers can genuinely tell themselves: “‘I’m behaving like anybody does when they’re attracted to somebody else. I’m flirting. I’m asking her out. I want to date her. I’m doing everything that you do if you’re attracted to somebody.’ What they don’t realise is the reason they’re attracted to her is because of their power over her. That’s what they don’t get.”

The article is here.

Monday, May 8, 2017

Improving Ethical Culture by Measuring Stakeholder Trust

Phillip Nichols and Patricia Dowden
Compliance and Ethics Blog
Originally posted April 10, 2017

Here is an excerpt:

People who study how individuals behave in organizations find that norms are far more powerful than formal rules, even formal rules that are backed up by legal sanctions.[ii] Thus, a norm that guides people to not steal is going to be more effective than a formal rule that prohibits stealing. Therein lies the benefit to a business firm. A strong ethical culture will be far more effective than formal rules (although of course there is still a need for formal rules).

When the “ethical culture” component of a business firm’s overall culture is strong – when norms and other things guide people in that firm to make sound ethical and social decisions – the firm benefits in two ways: it enhances the positive and controls the negative. In terms of enhancing the positive,  a strong ethical culture increases the amount of loyalty and commitment that people associated with a business firm have towards that firm. A strong ethical culture also contributes to higher levels of job satisfaction. People who are loyal and committed to a business firm are more likely to make “sacrifices” for that firm, meaning they are more likely to do things like working late or on weekends in order to get a project done, or help another department when that department needs extra help. People who are loyal and committed to a firm are more likely to defend that firm against accusers, and to stand by the firm in times of crisis. Workers who have high levels of job satisfaction are more likely to stay with a firm, and are more likely to refer customers to that firm and to recruit others to work for that firm.

The blog post is here.

Thursday, April 6, 2017

Would You Deliver an Electric Shock in 2015?

Dariusz Doliński, Tomasz Grzyb, and others
Social Psychological and Personality Science
First Published January 1, 2017

Abstract

In spite of the over 50 years which have passed since the original experiments conducted by Stanley Milgram on obedience, these experiments are still considered a turning point in our thinking about the role of the situation in human behavior. While ethical considerations prevent a full replication of the experiments from being prepared, a certain picture of the level of obedience of participants can be drawn using the procedure proposed by Burger. In our experiment, we have expanded it by controlling for the sex of participants and of the learner. The results achieved show a level of participants’ obedience toward instructions similarly high to that of the original Milgram studies. Results regarding the influence of the sex of participants and of the “learner,” as well as of personality characteristics, do not allow us to unequivocally accept or reject the hypotheses offered.

The article is here.

“After 50 years, it appears nothing has changed,” said social psychologist Tomasz Grzyb, an author of the new study, which appeared this week in the journal Social Psychological and Personality Science.

A Los Angeles Times article summarizes the study here.

Thursday, September 22, 2016

Does Situationism Threaten Free Will and Moral Responsibility?

Michael McKenna and Brandon Warmke
Journal of Moral Philosophy

Abstract

The situationist movement in social psychology has caused a considerable stir in philosophy over the last fifteen years. Much of this was prompted by the work of the philosophers Gilbert Harman (1999) and John Doris (2002). Both contended that familiar philosophical assumptions about the role of character in the explanation of human action were not supported by the situationists’ experimental results. Most of the ensuing philosophical controversy has focused upon issues related to moral psychology and ethical theory, especially virtue ethics. More recently, the influence of situationism has also given rise to further questions regarding free will and moral responsibility (e.g., Brink 2013; Ciurria 2013; Doris 2002; Mele and Shepherd 2013; Miller 2016; Nelkin 2005; Talbert 2009; and Vargas 2013b). In this paper, we focus just upon these latter issues. Moreover, we focus primarily on reasons-responsive theories. There is cause for concern that a range of situationist findings are in tension with the sort of reasons-responsiveness putatively required for free will and moral responsibility. Here, we develop and defend a response to the alleged situationist threat to free will and moral responsibility that we call pessimistic realism. We conclude on an optimistic note, however, exploring the possibility of strengthening our agency in the face of situational influences.

The article is here.

Wednesday, June 29, 2016

The Meaning(s) of Situationism

Michelle Ciurria
Teaching Ethics 15:1 (Spring 2015)
DOI: 10.5840/tej201411310

Abstract

This paper is about the meaning(s) of situationism. Philosophers have drawn various conclusions about situationism, some more favourable than others. Moreover, there is a difference between public reception of situationism, which has been very enthusiastic, and scholarly reception, which has been more cynical. In this paper, I outline what I take to be four key implications of situationism, based on careful scrutiny of the literature. Some situationist accounts, it turns out, are inconsistent with others, or incongruous with the logic of situationist psychology. If we are to teach students about situationism, we must first strive for relative consensus amongst experts, and then disseminate the results to philosophical educators in various fields.

The article is here.

Sunday, November 29, 2015

You’re not as virtuous as you think

By Nitin Nohria
The Washington Post
Originally published October 15, 2015

Moral overconfidence is on display in politics, in business, in sports — really, in all aspects of life. There are political candidates who say they won’t use attack ads, yet when they fall behind in the polls late in the race and come under pressure from donors and advisers, their ads become increasingly negative. There are chief executives who come in promising to build a business for the long term but then condone questionable accounting gimmickry to satisfy short-term market demands. There are baseball players who shun the use of steroids until they age past their peak performance and start to look for something to slow the decline. These people may be condemned as hypocrites. But they aren’t necessarily bad actors. Often, they’ve overestimated their inherent morality and underestimated the influence of situational factors.

Moral overconfidence is in line with what studies find to be our generally inflated view of ourselves. We rate ourselves as above-average drivers, investors and employees, even though math dictates that can’t be true for all of us. We also tend to believe we are less likely than the typical person to exhibit negative qualities and to experience negative life events: to get divorced, become depressed or have a heart attack.

The entire article is here.

Thursday, August 27, 2015

The Psychology of Whistleblowing

James Dungan, Adam Waytz, Liane Young
Current Opinion in Psychology
doi:10.1016/j.copsyc.2015.07.005

Abstract

Whistleblowing—reporting another person's unethical behavior to a third party—represents an ethical quandary. In some cases whistleblowing appears heroic whereas in other cases it appears reprehensible. This article describes how the decision to blow the whistle rests on the tradeoff that people make between fairness and loyalty. When fairness increases in value, whistleblowing is more likely whereas when loyalty increases in value, whistleblowing is less likely. Furthermore, we describe systematic personal, situational, and cultural factors stemming from the fairness-loyalty tradeoff that drive whistleblowing. Finally, we describe how minimizing this tradeoff and prioritizing constructive dissent can encourage whistleblowing and strengthen collectives.

The entire article is here.

Wednesday, July 1, 2015

The new neuroscience of genocide and mass murder

By Paul Rosenberg
Salon.com
Originally posted June 13, 2015

Here is an excerpt:

“Almost 20 years later I’m revisiting this issue in Paris,” Fried told Salon, saying several things motivated him, beginning with advances in neuroscience. “In neuroscience we’re moving more and more towards affective and social neuroscience; we are trying to address more complex social and psychological situations,” Fried said. “There has been some accumulation of knowledge in areas such as dehumanized perception, areas like theories of mind, the ability of other human beings to have a theory of mind of what is in another person’s mind—obviously this is completely obliterated in a situation of Syndrome E—and our understanding of neural mechanisms of empathy, a development which occurred over the last 10 years.” He added, “I think people are looking more at neuroscience correlations of interactions between people, so for instance the mirror neurons, the whole idea of mirror neurons, and what happens when you look at somebody else, what happens to your own brain.” He cited institutional developments as well—new organizations and journals supporting social cognitive research—all of which helped make the time ripe for a new look at Syndrome E.

But Fried also pointed to the ability to engage more robustly with criticisms across disciplinary fields. “I saw a renewed interest and ability to raise this question, because after I raised it initially there was really, some people were offended that I was giving a biological explanation to something that for them was just a bunch of scum shooting at innocent people, which it is, to some extent,” he admitted. Now, however, Fried sees a greater willingness to argue things through. “People are more attuned to the question of what is the relationship of neuroscience to the legal system, to the issue of responsibility—what is the definition of the responsible self—our sense of identity, our sense of responsibility. There are a lot of these types of questions which are raised with the development of neuroscience.”

The entire article is here.

Friday, June 26, 2015

Have You Ever Been Wrong?

By Peter Wehner
Commentary
Originally posted June 6, 2015

Here is an excerpt:

“Thus,” Mr. Mehlman writes, “policy positions were not driving partisanship, but rather partisanship was driving policy positions. Voters took whichever position was ascribed to their party, irrespective of the specific policies that position entailed.”

So what explains this? Some of it probably has to do with deference. Many people don’t follow public policy issues very closely — but they do know whose team they’re on. And so if their team endorses a particular policy, they’re strongly inclined to as well. They assume the position merits support based on who (and who does not) supports it.

The flip side of this is mistrust. If you’re a Democrat and you are told about the details of a Republican plan, you might automatically assume it’s a bad one (the same goes for how a Republican would receive a Democratic plan). If a party you despise holds a view on a certain issue, your reflex will be to hold that opposite view.

The entire article is here.

Tuesday, April 14, 2015

Hannah Arendt: thinking versus evil

By Jon Nixon
Times Higher Education
Originally posted February 26, 2015

Here are two excerpts:

That is why the notion of “thinking” played such an important part in Arendt’s analysis of totalitarianism, from her 1951 The Origins of Totalitarianism to her highly controversial coverage of the Adolf Eichmann trial, the latter culminating in her 1963 book Eichmann in Jerusalem. In this, she famously employed the phrase “the banality of evil” to describe what she saw as Eichmann’s unquestioning adherence to the norms of the Nazi regime. In concluding from the occasional lies and inconsistencies in his courtroom testimony that Eichmann was a liar, the prosecution had missed the moral and legal challenge of the case: “Their case rested on the assumption that the defendant, like all ‘normal persons’, must have been aware of the criminal nature of his acts” – but, she added, Eichmann was normal only in so far as he was “no exception within the Nazi regime”. The prosecution had, according to Arendt’s analysis, failed to grasp the moral and political significance of Eichmann’s “abnormality”: namely, his adherence to the norms of the regime he had served and therefore his lack of awareness of the criminal nature of his acts.

(cut)

In Arendt’s view, Eichmann’s “banality” left him no less culpable – and rendered the death sentence no less justifiable – but it shifted the basis of the argument against him: if he was a monster, then his monstrosity arose from an all too human propensity towards thoughtlessness. If Heidegger had represented the unworldliness of “pure thought”, then Eichmann represented the unworldliness of “thoughtlessness”. Neither connected with the plurality of the world as Arendt understood it. A world devoid of thinking, willing and judging would, she argued, be a world inhabited by automatons such as Eichmann who lacked freedom of will and any capacity for independent judgement.

The entire article is here.

Tuesday, November 25, 2014

Fabricating and plagiarising: when researchers lie

By Mark Israel
The Conversation
Originally published November 5, 2014

Here is an excerpt:

Systematic research into the causes of scientific misconduct is scarce. However, occasionally committees of investigation and research organisations have offered some comment. Some see the researcher as a “bad apple”. A researcher’s own ambition, vanity, desire for recognition and fame, and the prospect for personal gain may lead to behaviour that crosses the limits of what is admissible. Others point to the culture that may prevail in certain disciplines or research groups (“bad barrel”).

Again others identify the creation of a research environment overwhelmed by corrupting pressures (“bad barrel maker”). Many academics are under increasing pressure to publish – and to do so in English irrespective of their competence in that language – as their nation or institution seeks to establish or defend its placing in international research rankings.

The entire article is here.

Sunday, October 19, 2014

Accountability for Research Misconduct

By Zubin Master
Health Research, Research Ethics, Science Funding
Originally posted September 23, 2014

Here is an excerpt:

This case raises important questions about the responsibilities of research institutions to promote research integrity and to prevent research misconduct. Philip Zimbardo’s Stanford prison experiments and other social psychology research have taught us that ethical behavior is not only shaped by dispositional attribution (an internal moral character), but also by many situational (environmental) features. Similarly, our understanding of the cause of research misconduct is shifting away from the idea that this is just a problem of a few “bad apples” to a broader understanding of how the immense pressure to both publish and translate research findings into products, as well as poor institutional supports, influences research misconduct.

This is not to excuse misbehaviour by researchers, but rather to shed light on the fact that institutions also bear moral responsibility for research misconduct. Thus far, institutions have taken few measures to promote research integrity and prevent research misconduct. Indeed, in many high profile cases of research misconduct, they remain virtually blameless.

The entire article is here.

Tuesday, February 4, 2014

Responsibility and Blame in the Clinic

By Hanna Pickard
Flickers of Freedom
Originally posted January 17, 2014

Here is an excerpt:

But we can really help these patients if we adopt a stance that I call “Responsibility without Blame”. Here’s what this means. The problem behaviour is voluntary. Patients with PD are not mentally ill and they know as well as most of us do what they are doing when they act. They have choice and control over their behaviour at least in the minimal sense that they can refrain – which they will often do if sufficiently motivated.  That does not mean that refraining is easy.  Here a little more background is important: PD is associated with extreme early psycho-socio-economic adversity. Most patients come from dysfunctional families or they may have been in institutional care. Rates of childhood sexual, emotional, and physical abuse or neglect are very high. Socio-economic status is low. Additional associated factors include war, migration, and poverty. Problem behaviour is often a learned, habitual way of coping with the distress caused by such adversity, and patients may have hitherto lacked decent opportunities to learn alternative, better ways of coping. So, until the underlying distress is addressed and new ways of coping are learned, restraint is hard.

The entire blog post is here.

Sunday, February 2, 2014

The Distinction Between Antisociality And Mental Illness

By Abigail Marsh
Edge.org
Originally published January 15, 2014

Here is an excerpt:

Cognitive biases include widespread tendencies to view actions that cause harm to others as fundamentally more intentional and blameworthy than identical actions that happen not to result in harm to others, as has been shown by Joshua Knobe and others in investigations of the "side-effect effect", and to view agents who cause harm as fundamentally more capable of intentional and goal-directed behavior than those who incur harm, as has been shown by Kurt Gray and others in investigations of distinction between moral agents and moral patients. These biases dictate that an individual who is predisposed to behavior that harms others as a result of genetic and environmental risk factors will be inherently viewed as more responsible for his or her behaviors than another individual predisposed to behavior that harms himself as a result of similar genetic and environmental risk factors. The tendency to view those who harm others as responsible for their actions, and thus blameworthy, may reflect seemingly evolved tendencies to reinforce social norms by blaming and punishing wrongdoers for their misbehavior.

The entire blog post is here.

Monday, November 18, 2013

Psyching Us Out: The Promises of ‘Priming’

By Gary Gutting
The New York Times - Opinionator
Originally published October 31, 2013

Reports of psychological experiments are journalistic favorites.  This is especially true of experiments revealing the often surprising effects of “priming” on human behavior. Priming occurs when a seemingly trivial alteration in an experimental situation produces major changes in the behavior of the subjects.

The classic priming experiment was one in which college students had been asked to form various sentences from a given set of words.  Those in one group were given words that included several associated with older people (like bingo, gray and Florida).  Those in a second group were given words with no such associations.  After the linguistic exercise, each participant was instructed to leave the building by walking down a hallway.  Without letting the participants know what was going on, the experimenters timed their walks down the hall.  They found that those in the group given words associated with old people walked significantly slower than those in the other group.  The first group had been primed to walk more slowly.

The entire story is here.

Sunday, November 10, 2013

Decomposing the Will - Book Review

Andy Clark, Julian Kiverstein, and Tillmann Vierkant (eds.), Decomposing the Will, Oxford University Press, 2013, 356pp., $74.00 (hbk), ISBN 9780199746996.

Reviewed by Marcela Herdova, King's College London

Decomposing the Will is a collection of 17 papers that examine recent developments in cognitive sciences in relation to claims about conscious agency (or lack thereof) and the implications of these findings for the free will debate. The overarching theme of the volume is exploring conscious will as "decomposed" into interrelated functions. The volume has four sections. Part 1 surveys scientific research that has been taken by many to support what the editors refer to as "the zombie challenge". The zombie challenge stems from claims about the limited role of consciousness in ordinary behavior. If conscious control is required for free will, this recent scientific research, which challenges conscious efficacy, also undermines free will. In part 2, authors explore various layers of the sense of agency. Part 3 investigates how to use both phenomenology and science to address the zombie challenge and discusses a variety of possible functions for conscious control. Part 4 offers decomposed accounts of the will.

Due to limitations of space, I will offer extended discussion of only a handful of papers. I provide a brief description for the remaining papers.

The entire book review is here.