Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Action. Show all posts

Saturday, February 17, 2024

What Stops People From Standing Up for What’s Right?

Julie Sasse
Greater Good
Originally published 17 Jan 24

Here is an excerpt:

How can we foster moral courage?

Every person can try to become more morally courageous. However, it does not have to be a solitary effort. Instead, institutions such as schools, companies, or social media platforms play a significant role. So, what are concrete recommendations to foster moral courage?
  • Establish and strengthen social and moral norms: With a solid understanding of what we consider right and wrong, it becomes easier to detect wrongdoings. Institutions can facilitate this process by identifying and modeling fundamental values. For example, norms and values expressed by teachers can be important points of reference for children and young adults.
  • Overcome uncertainty: If it is unclear whether someone’s behavior is wrong, witnesses should feel comfortable to inquire, for example, by asking other bystanders how they judge the situation or a potential victim whether they are all right.
  • Contextualize anger: In the face of wrongdoings, anger should not be suppressed since it can provide motivational fuel for intervention. Conversely, if someone expresses anger, it should not be diminished as irrational but considered a response to something unjust. 
  • Provide and advertise reporting systems: By providing reporting systems, institutions relieve witnesses from the burden of selecting and evaluating individual means of intervention and reduce the need for direct confrontation.
  • Show social support: If witnesses directly confront a perpetrator, others should be motivated to support them to reduce risks.
We see that there are several ways to make moral courage less difficult, but they do require effort from individuals and institutions. Why is that effort worth it? Because if more individuals are willing and able to show moral courage, more wrongdoings would be addressed and rectified—and that could help us to become a more responsible and just society.


Main points:
  • Moral courage is the willingness to stand up for what's right despite potential risks.
  • It's rare because of several factors: the complexity of the internal process, situational barriers, and the difficulty of seeing long-term benefits.
  • Key stages involve noticing a wrongdoing, interpreting it as wrong, feeling responsible, believing in your ability to intervene, and accepting potential risks.
  • Personality traits and situational factors influence these stages.

Thursday, November 23, 2023

How to Maintain Hope in an Age of Catastrophe

Masha Gessen
The Atlantic
Originally posted 12 Nov 23

Gessen interviews psychoanalyst and author Robert Jay Lifton.  Here is an excerpt from the beginning of the article/interview:

Lifton is fascinated by the range and plasticity of the human mind, its ability to contort to the demands of totalitarian control, to find justification for the unimaginable—the Holocaust, war crimes, the atomic bomb—and yet recover, and reconjure hope. In a century when humanity discovered its capacity for mass destruction, Lifton studied the psychology of both the victims and the perpetrators of horror. “We are all survivors of Hiroshima, and, in our imaginations, of future nuclear holocaust,” he wrote at the end of “Death in Life.” How do we live with such knowledge? When does it lead to more atrocities and when does it result in what Lifton called, in a later book, “species-wide agreement”?

Lifton’s big books, though based on rigorous research, were written for popular audiences. He writes, essentially, by lecturing into a Dictaphone, giving even his most ambitious works a distinctive spoken quality. In between his five large studies, Lifton published academic books, papers and essays, and two books of cartoons, “Birds” and “PsychoBirds.” (Every cartoon features two bird heads with dialogue bubbles, such as, “ ‘All of a sudden I had this wonderful feeling: I am me!’ ” “You were wrong.”) Lifton’s impact on the study and treatment of trauma is unparalleled. In a 2020 tribute to Lifton in the Journal of the American Psychoanalytic Association, his former colleague Charles Strozier wrote that a chapter in “Death in Life” on the psychology of survivors “has never been surpassed, only repeated many times and frequently diluted in its power. All those working with survivors of trauma, personal or sociohistorical, must immerse themselves in his work.”


Here is my summary of the article and helpful tips.  Happy (hopeful) Thanksgiving!!

Hope is not blind optimism or wishful thinking, but rather a conscious decision to act in the face of uncertainty and to believe in the possibility of a better future. The article/interview identifies several key strategies for cultivating hope, including:
  • Nurturing a sense of purpose: Having a clear sense of purpose can provide direction and motivation, even in the darkest of times. This purpose can be rooted in personal goals, relationships, or a commitment to a larger cause.
  • Engaging in meaningful action: Taking concrete steps, no matter how small, can help to combat feelings of helplessness and despair. Action can range from individual acts of kindness to participation in collective efforts for social change.
  • Cultivating a sense of community: Connecting with others who share our concerns can provide a sense of belonging and support. Shared experiences and collective action can amplify our efforts and strengthen our resolve.
  • Maintaining a critical perspective: While it is important to hold onto hope, it is also crucial to avoid complacency or denial. We need to recognize the severity of the challenges we face and to remain vigilant in our efforts to address them.
  • Embracing resilience: Hope is not about denying hardship or expecting a quick and easy resolution to our problems. Rather, it is about cultivating the resilience to persevere through difficult times and to believe in the possibility of positive change.

The article concludes by emphasizing the importance of hope as a driving force for positive change. Hope is not a luxury, but a necessity for survival and for building a better future. By nurturing hope, we can empower ourselves and others to confront the challenges we face and to work towards a more just and equitable world.

Saturday, August 19, 2023

Reverse-engineering the self

Paul, L., Ullman, T. D., De Freitas, J., & Tenenbaum, J.
(2023, July 8). PsyArXiv
https://doi.org/10.31234/osf.io/vzwrn

Abstract

To think for yourself, you need to be able to solve new and unexpected problems. This requires you to identify the space of possible environments you could be in, locate yourself in the relevant one, and frame the new problem as it exists relative to your location in this new environment. Combining thought experiments with a series of self-orientation games, we explore the way that intelligent human agents perform this computational feat by “centering” themselves: orienting themselves perceptually and cognitively in an environment, while simultaneously holding a representation of themselves as an agent in that environment. When faced with an unexpected problem, human agents can shift their perceptual and cognitive center from one location in a space to another, or “re-center”, in order to reframe a problem, giving them a distinctive type of cognitive flexibility. We define the computational ability to center (and re-center) as “having a self,” and propose that implementing this type of computational ability in machines could be an important step towards building a truly intelligent artificial agent that could “think for itself”. We then develop a conceptually robust, empirically viable, engineering-friendly implementation of our proposal, drawing on well established frameworks in cognition, philosophy, and computer science for thinking, planning, and agency.


The authors argue that the computational structure of the self is a key component of human intelligence, and they propose a framework for reverse-engineering the self that draws on work in cognition, philosophy, and computer science.

The authors argue that the self is a computational agent that is able to learn and think for itself. This agent has a number of key abilities, including:
  • The ability to represent the world and its own actions.
  • The ability to plan and make decisions.
  • The ability to learn from experience.
  • The ability to have a sense of self.
The authors argue that these abilities can be modeled as a POMDP (partially observable Markov decision process), a mathematical model of sequential decision-making in which the decision-maker has incomplete information about the state of the environment. They propose a number of methods for reverse-engineering the self, including:
  • Using data from brain imaging studies to identify the neural correlates of self-related processes.
  • Using computational models of human decision-making to test hypotheses about the computational structure of the self.
  • Using philosophical analysis to clarify the nature of self-related concepts.
The authors argue that reverse-engineering the self is a promising approach to understanding human intelligence and developing artificial intelligence systems that are capable of thinking for themselves.
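To make the POMDP framing above concrete, here is a minimal sketch of the model's core machinery, the Bayesian belief update an agent performs over hidden states after acting and observing. This is an illustrative toy only; the paper does not provide code, and the state, action, and observation names below are invented for the example.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

State, Action, Obs = str, str, str

@dataclass
class POMDP:
    """Toy POMDP: transition and observation models as probability tables."""
    states: List[State]
    transition: Dict[Tuple[State, Action], Dict[State, float]]   # T(s' | s, a)
    observation: Dict[Tuple[State, Action], Dict[Obs, float]]    # Z(o | s', a)

    def update_belief(self, belief: Dict[State, float],
                      action: Action, obs: Obs) -> Dict[State, float]:
        """Bayes update: b'(s') ∝ Z(o | s', a) * Σ_s T(s' | s, a) b(s)."""
        new_belief = {}
        for s2 in self.states:
            predicted = sum(self.transition[(s, action)].get(s2, 0.0) * p
                            for s, p in belief.items())
            new_belief[s2] = self.observation[(s2, action)].get(obs, 0.0) * predicted
        total = sum(new_belief.values())
        return {s: p / total for s, p in new_belief.items()} if total else new_belief

# Example: two hidden states; a "listen" action that leaves the state
# unchanged but yields a noisy observation of where the agent is.
pomdp = POMDP(
    states=["A", "B"],
    transition={("A", "listen"): {"A": 1.0}, ("B", "listen"): {"B": 1.0}},
    observation={("A", "listen"): {"hearA": 0.8, "hearB": 0.2},
                 ("B", "listen"): {"hearA": 0.2, "hearB": 0.8}},
)
b = pomdp.update_belief({"A": 0.5, "B": 0.5}, "listen", "hearA")
# Belief shifts toward state A (≈ 0.8) after hearing evidence for A.
```

In the authors' terms, the belief state is the agent's "centered" representation of where it is; re-centering corresponds to revising that belief when an unexpected observation arrives.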

Tuesday, February 15, 2022

How do people use ‘killing’, ‘letting die’ and related bioethical concepts? Contrasting descriptive and normative hypotheses

Rodríguez-Arias, D., et al. (2020)
Bioethics, 34(5)
https://doi.org/10.1111/bioe.12707

Abstract

Bioethicists involved in end-of-life debates routinely distinguish between ‘killing’ and ‘letting die’. Meanwhile, previous work in cognitive science has revealed that when people characterize behaviour as either actively ‘doing’ or passively ‘allowing’, they do so not purely on descriptive grounds, but also as a function of the behaviour’s perceived morality. In the present report, we extend this line of research by examining how medical students and professionals (N = 184) and laypeople (N = 122) describe physicians’ behaviour in end-of-life scenarios. We show that the distinction between ‘ending’ a patient’s life and ‘allowing’ it to end arises from morally motivated causal selection. That is, when a patient wishes to die, her illness is treated as the cause of death and the doctor is seen as merely allowing her life to end. In contrast, when a patient does not wish to die, the doctor’s behaviour is treated as the cause of death and, consequently, the doctor is described as ending the patient’s life. This effect emerged regardless of whether the doctor’s behaviour was omissive (as in withholding treatment) or commissive (as in applying a lethal injection). In other words, patient consent shapes causal selection in end-of-life situations, and in turn determines whether physicians are seen as ‘killing’ patients, or merely as ‘enabling’ their death.

From the Discussion

Across three cases of end-of-life intervention, we find convergent evidence that moral appraisals shape behavior description (Cushman et al., 2008) and causal selection (Alicke, 1992; Kominsky et al., 2015). Consistent with the deontic hypothesis, physicians who behaved according to patients’ wishes were described as allowing the patient’s life to end. In contrast, physicians who disregarded the patient’s wishes were described as ending the patient’s life. Additionally, patient consent appeared to inform causal selection: The doctor was seen as the cause of death when disregarding the patient’s will; but the illness was seen as the cause of death when the doctor had obeyed the patient’s will.

Whether the physician’s behavior was omissive or commissive did not play a comparable role in behavior description or causal selection. First, these effects were weaker than those of patient consent. Second, while the effects of consent generalized to medical students and professionals, the effects of commission arose only among lay respondents. In other words, medical students and professionals treated patient consent as the sole basis for the doing/allowing distinction.

Taken together, these results confirm that doing and allowing serve a fundamentally evaluative purpose (in line with the deontic hypothesis, and Cushman et al., 2008), and only secondarily serve a descriptive purpose, if at all.

Tuesday, June 8, 2021

Action and inaction in moral judgments and decisions: Meta-analysis of omission-bias omission-commission asymmetries

Jamison, J., Yay, T., & Feldman, G.
Journal of Experimental Social Psychology
Volume 89, July 2020, 103977

Abstract

Omission bias is the preference for harm caused through omissions over harm caused through commissions. In a pre-registered experiment (N = 313), we successfully replicated an experiment from Spranca, Minsk, and Baron (1991), considered a classic demonstration of the omission bias, examining generalizability to a between-subject design with extensions examining causality, intent, and regret. Participants in the harm through commission condition(s) rated harm as more immoral and attributed higher responsibility compared to participants in the harm through omission condition (d = 0.45 to 0.47 and d = 0.40 to 0.53). An omission-commission asymmetry was also found for perceptions of causality and intent, in that commissions were attributed stronger action-outcome links and higher intentionality (d = 0.21 to 0.58). The effect for regret was opposite from the classic findings on the action-effect, with higher regret for inaction over action (d = −0.26 to −0.19). Overall, higher perceived causality and intent were associated with higher attributed immorality and responsibility, and with lower perceived regret.
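The abstract above reports its effects as Cohen's d, the standardized mean difference between two groups (here, the omission and commission conditions). For readers unfamiliar with the measure, this is a minimal sketch of how d is computed with a pooled standard deviation; the data in the example are made up, not the study's.

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d: (mean1 - mean2) / pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)  # sample variance
    s_pooled = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / s_pooled

# Hypothetical immorality ratings for commission vs. omission conditions:
d = cohens_d([5, 6, 7], [3, 4, 5])
```

By the usual rule of thumb, d ≈ 0.2 is a small effect, 0.5 medium, and 0.8 large, so the study's d = 0.40 to 0.53 for responsibility attribution is a small-to-medium effect.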

From the Discussion

Regret: Deviation from the action-effect 

The classic action-effect (Kahneman & Tversky, 1982) findings were that actions leading to a negative outcome are regretted more than inactions leading to the same negative outcomes. We added a regret measure to examine whether the action-effect findings would extend to situations of morality involving intended harmful behavior. Our findings were opposite to the expected action-effect omission-commission asymmetry, with participants rating omissions as more regretted than commissions (d = 0.18 to 0.26).

One explanation for this surprising finding may be an intermingling of the perception of an actor’s regret for their behavior with their regret for the outcome. In typical action-effect scenarios, actors behave in a way that is morally neutral but are faced with an outcome that deviates from expectations, such as losing money over an investment. In this study’s omission-bias scenarios, the actors behaved immorally to harm others for personal or interpersonal gain, and then are faced with an outcome that deviates from expectation. We hypothesized that participants would perceive actors as being more regretful for taking action that would immorally harm another person rather than allowing that harm through inaction. Yet it is plausible that participants were focused on the regret that actors would feel for not taking more direct action towards their goal of personal or interpersonal gain.

Another possible explanation for the regret finding is the side-taking hypothesis (DeScioli, 2016; DeScioli & Kurzban, 2013). This states that group members side against a wrongdoer who has performed an action perceived as morally wrong, in part by attributing to them a lack of remorse or regret. The negative relationship observed between the positive characteristic of regret and the negative characteristics of immorality, causality, and intentionality supports this explanation. Future research may be able to explore the true mechanisms of regret in such scenarios.

Wednesday, January 29, 2020

Why morals matter in foreign policy

Joseph Nye
aspistrategist.org.au
Originally published 10 Jan 20

Here is the conclusion:

Good moral reasoning should be three-dimensional, weighing and balancing intentions, consequences and means. A foreign policy should be judged accordingly. Moreover, a moral foreign policy must consider consequences such as maintaining an institutional order that encourages moral interests, in addition to particular newsworthy actions such as helping a dissident or a persecuted group in another country. And it’s important to include the ethical consequences of ‘nonactions’, such as President Harry S. Truman’s willingness to accept stalemate and domestic political punishment during the Korean War rather than follow General Douglas MacArthur’s recommendation to use nuclear weapons. As Sherlock Holmes famously noted, much can be learned from a dog that doesn’t bark.

It’s pointless to argue that ethics will play no role in the foreign policy debates that await this year. We should acknowledge that we always use moral reasoning to judge foreign policy, and we should learn to do it better.


Thursday, October 25, 2018

The Little-known Emotion that Makes Ethical Leadership Contagious

Notre Dame Center for Ethics
www.ethicalleadership.nd.edu
Originally posted in September 2018

Here is an excerpt:

Elevation at Work

Elevation is not limited to dramatic and dangerous situations. It can also arise in more mundane places like assembly lines, meeting rooms, and corporate offices. In fact, elevation is a powerful and often under-appreciated force that makes ethical leadership work. A 2010 study collected data from workers about their feelings toward their supervisors and found that bosses could cause their followers to experience elevation through acts of fairness and self-sacrifice. Elevation caused these workers to have positive feelings toward their bosses, and the effect spilled over into other relationships; they were kinder and more helpful toward their coworkers and more committed to their organization as a whole.

These findings suggest that elevation is a valuable emotion for leaders to understand. It can give ethical leadership traction by helping a leader's values and behaviors take root in his or her followers. One study puts it this way: "Elevation puts moral values into action."

Put it in Practice

The best way to harness elevation in your organization is by changing the way you communicate about ethics. Keep these guidelines in mind.

Find exemplars who elevate you and others.

Most companies have codes of values. But true moral inspiration comes from people, not from abstract principles. Although we need rules, guidelines, regulations, and laws, we are only inspired by the people who embody them and live them out. For each of your organization's values, make sure you can identify a person who exemplifies it in his or her life and work.


Tuesday, August 14, 2018

The developmental and cultural psychology of free will

Tamar Kushnir
Philosophy Compass
Originally published July 12, 2018

Abstract

This paper provides an account of the developmental origins of our belief in free will based on research from a range of ages—infants, preschoolers, older children, and adults—and across cultures. The foundations of free will beliefs are in infants' understanding of intentional action—their ability to use context to infer when agents are free to “do otherwise” and when they are constrained. In early childhood, new knowledge about causes of action leads to new abilities to imagine constraints on action. Moreover, unlike adults, young children tend to view psychological causes (i.e., desires) and social causes (i.e., following rules or group norms, being kind or fair) of action as constraints on free will. But these beliefs change, and also diverge across cultures, corresponding to differences between Eastern and Western philosophies of mind, self, and action. Finally, new evidence shows developmentally early, culturally dependent links between free will beliefs and behavior, in particular when choice‐making requires self‐control.

Here is part of the Conclusion:

I've argued here that free will beliefs are early‐developing and culturally universal, and that the folk psychology of free will involves considering actions in the context of alternative possibilities and constraints on possibility. There are developmental differences in how children reason about the possibility of acting against desires, and there are both developmental and cultural differences in how children consider the social and moral limitations on possibility.  Finally, there is new evidence emerging for developmentally early, culturally moderated links between free will beliefs and willpower, delay of gratification, and self‐regulation.


Wednesday, May 9, 2018

How To Deliver Moral Leadership To Employees

John Baldoni
Forbes.com
Originally posted April 12, 2018

Here is an excerpt:

When it comes to moral authority there is a disconnect between what is expected and what is delivered. So what can managers do to fulfill their employees' expectations?

First, let’s cover what not to do – preach! Employees don’t want words; they want actions. They also do not expect to have to follow a particular religious creed at work. Just as with the separation of church and state, there is an implied separation in the workplace, especially now with employees of many different (or no) faiths. (There are exceptions within privately held, family-run businesses.)

LRN advocates doing two things: pause to reflect on the situation as a means of connecting with values and second act with humility. The former may be easier than the latter, but it is only with humility that leaders connect more realistically with others. If you act your title, you set up barriers to understanding. If you act as a leader, you open the door to greater understanding.

Dov Seidman, CEO of LRN, advises leaders to instill purpose, elevate and inspire individuals and live your values. Very importantly in this report, Seidman challenges leaders to embrace moral challenges as he says, by “constant wrestling with the questions of right and wrong, fairness and justice, and with ethical dilemmas.”
