Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Killing.

Tuesday, February 15, 2022

How do people use ‘killing’, ‘letting die’ and related bioethical concepts? Contrasting descriptive and normative hypotheses

Rodríguez-Arias, D., et al. (2020)
Bioethics, 34(5)
DOI: 10.1111/bioe.12707

Abstract

Bioethicists involved in end-of-life debates routinely distinguish between ‘killing’ and ‘letting die’. Meanwhile, previous work in cognitive science has revealed that when people characterize behaviour as either actively ‘doing’ or passively ‘allowing’, they do so not purely on descriptive grounds, but also as a function of the behaviour’s perceived morality. In the present report, we extend this line of research by examining how medical students and professionals (N = 184) and laypeople (N = 122) describe physicians’ behaviour in end-of-life scenarios. We show that the distinction between ‘ending’ a patient’s life and ‘allowing’ it to end arises from morally motivated causal selection. That is, when a patient wishes to die, her illness is treated as the cause of death and the doctor is seen as merely allowing her life to end. In contrast, when a patient does not wish to die, the doctor’s behaviour is treated as the cause of death and, consequently, the doctor is described as ending the patient’s life. This effect emerged regardless of whether the doctor’s behaviour was omissive (as in withholding treatment) or commissive (as in applying a lethal injection). In other words, patient consent shapes causal selection in end-of-life situations, and in turn determines whether physicians are seen as ‘killing’ patients, or merely as ‘enabling’ their death.

From the Discussion

Across three cases of end-of-life intervention, we find convergent evidence that moral appraisals shape behavior description (Cushman et al., 2008) and causal selection (Alicke, 1992; Kominsky et al., 2015). Consistent with the deontic hypothesis, physicians who behaved according to patients’ wishes were described as allowing the patient’s life to end. In contrast, physicians who disregarded the patient’s wishes were described as ending the patient’s life. Additionally, patient consent appeared to inform causal selection: The doctor was seen as the cause of death when disregarding the patient’s will, but the illness was seen as the cause of death when the doctor had obeyed the patient’s will.

Whether the physician’s behavior was omissive or commissive did not play a comparable role in behavior description or causal selection. First, these effects were weaker than those of patient consent. Second, while the effects of consent generalized to medical students and professionals, the effects of commission arose only among lay respondents. In other words, medical students and professionals treated patient consent as the sole basis for the doing/allowing distinction.

Taken together, these results confirm that doing and allowing serve a fundamentally evaluative purpose (in line with the deontic hypothesis, and Cushman et al., 2008), and only secondarily serve a descriptive purpose, if at all.

Tuesday, September 29, 2020

We Don’t Know How to Warn You Any Harder. America is Dying.

Umair Haque
eand.co
Originally posted August 29, 2020

Right about now, something terrible is happening in America. Society is one tiny step away from the final collapse of democracy, at the hands of a true authoritarian and his fanatics. Meanwhile, America’s silent majority is still slumbering, oblivious to the depth and gravity of the threat.

I know that strikes many of you as somehow wrong. So let me challenge you for a moment. How much experience do you really have with authoritarianism? Any? If you’re a “real” American, you have precisely none.

Take it from us survivors and scholars of authoritarianism. This is exactly how it happens. The situation could not — could not — be any worse. The odds are now very much against American democracy surviving.

If you don’t believe me, ask a friend. I invite everyone who’s lived under authoritarianism to comment. Those of us who have?

We survivors of authoritarianism have a terrible, terrible foreboding, because we are experiencing something we should never have to: déjà vu. Our parents fled from collapsing societies to America. And here, now, in a grim and eerie repeat of history, we see the scenes of our childhoods played out all over again. Only now, in the land that we came to. We see the stories our parents recounted to us happening before our eyes, only this time in the place they brought us to escape from all those horrors, abuses, and depredations.

(cut)

There is a crucial lesson there. America already has an ISIS, a Taliban, an SS waiting to be born: a group of young men willing to do violence at the drop of a hat, because they’ve been brainwashed into hating. The demagogue has blamed hated minorities and advocates of democracy and peace for those young men’s stunted life chances, and they believe him. That’s exactly what an ISIS is, what a Taliban is, what an SS is. The only thing left for an authoritarian to do is formalize it.

But when radicalized young men are killing people that demagogues have taught them to hate, right in the open, on the streets — a society has reached the beginnings of sectarian violence, the kind familiar in the Islamic world, and is at the end of democracy’s road.

The info is here.

Thursday, January 31, 2019

A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values

Caroline Lester
The New Yorker
Originally posted January 24, 2019

Here is an excerpt:

The U.S. government has clear guidelines for autonomous weapons—they can’t be programmed to make “kill decisions” on their own—but no formal opinion on the ethics of driverless cars. Germany is the only country that has devised such a framework; in 2017, a German government commission—headed by Udo Di Fabio, a former judge on the country’s highest constitutional court—released a report that suggested a number of guidelines for driverless vehicles. Among the report’s twenty propositions, one stands out: “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.” When I sent Di Fabio the Moral Machine data, he was unsurprised by the respondents’ prejudices. Philosophers and lawyers, he noted, often have very different understandings of ethical dilemmas than ordinary people do. This difference may irritate the specialists, he said, but “it should always make them think.” Still, Di Fabio believes that we shouldn’t capitulate to human biases when it comes to life-and-death decisions. “In Germany, people are very sensitive to such discussions,” he told me, by e-mail. “This has to do with a dark past that has divided people up and sorted them out.”

The info is here.

Monday, October 22, 2018

Trump's 'America First' Policy Puts Economy Before Morality

Zeke Miller, Jonathan Lemire, and Catherine Lucey
www.necn.com
Originally posted October 18, 2018

Here is an excerpt:

Still, Trump's transactional approach isn't sitting well with some of his Republican allies in Congress. His party for years championed the idea that the U.S. had a duty to promote U.S. values and human rights and even to intervene when they are challenged. Some Republicans have urged Trump not to abandon that view.

"I'm open to having Congress sit down with the president if this all turns out to be true, and it looks like it is, ... and saying, 'How can we express our condemnation without blowing up the Middle East?" Sen. John Kennedy, R-La., said. "Our foreign policy has to be anchored in values."

Trump dismisses the notion that he buddies up to dictators, but he does not express a sense that U.S. leadership extends beyond the U.S. border.

In an interview with CBS' "60 Minutes" that aired Sunday, he brushed aside his own assessment that Putin was "probably" involved in assassinations and poisonings.

"But I rely on them," he said. "It's not in our country."

Relations between the U.S. and Saudi Arabia are complex. The two nations are entwined on energy, military, economic and intelligence issues. The Trump administration has aggressively courted the Saudis for support of its Middle East agenda to counter Iranian influence, fight extremism and try to forge peace between Israel and the Palestinians.

The info is here.

Tuesday, August 7, 2018

Google’s AI ethics won't curb war by algorithm

Phoebe Braithwaite
Wired.com
Originally published July 5, 2018

Here is an excerpt:

One of these programmes is Project Maven, which trains artificial intelligence systems to parse footage from surveillance drones in order to “extract objects from massive amounts of moving or still imagery,” writes Drew Cukor, chief of the Algorithmic Warfare Cross-Functional Team. The programme is a key element of the US army’s efforts to select targets. One of the companies working on Maven is Google. Engineers at Google have protested their company’s involvement; their peers at companies like Amazon and Microsoft have made similar complaints, calling on their employers not to support the development of the facial recognition tool Rekognition, for use by the military, police and immigration control. For technology companies, this raises a question: should they play a role in governments’ use of force?

The US government’s policy of using armed drones to hunt its enemies abroad has long been controversial. Gibson argues that the CIA and US military are using drones to strike “far from the hot battlefield, against communities that aren't involved in an armed conflict, based on intelligence that is quite frequently wrong”. Paul Scharre, director of the technology and national security programme at the Center for a New American Security and author of Army of None, says that the use of drones and computing power is making the US military a much more effective and efficient force that kills far fewer civilians than in previous wars. “We actually need tech companies like Google helping the military to do many other things,” he says.

The article is here.

Sunday, March 25, 2018

Did Iraq Ever Become A Just War?

Matt Peterson
The Atlantic
Originally posted March 24, 2018

Here is an excerpt:

There’s a broader sense of moral confusion about the conduct of America’s wars. In Iraq, what started as a war of choice came to resemble much more a war of necessity. Can a war that started unjustly ever become righteous? Or does the stain permanently taint anything that comes after it?

The answers to these questions come from the school of philosophy called “just war” theory, which tries to explain whether and when war is permissible, and under what circumstances. It offers two big ways to think about the justice of war. One is whether it’s appropriate to go to war in the first place. Take North Korea, for example. Is there a cause worth killing thousands—millions—of North and South Korean civilians over? Invoking “national security” isn’t enough to make a war just. Kim Jong Un’s nuclear weapons pose an obvious threat to South Korea, Japan, and the United States. But that alone doesn’t make war an acceptable choice, given the lives at stake. The ethics of war require the public to assess how certain it is that innocents will be killed if the military doesn’t act (Will Kim really use his nukes offensively?), whether there’s any way to remove the threat without violence (Has diplomacy been exhausted?), and whether the scale of the deaths that would come from intervention is truly in line with the danger war is meant to avert (If the peninsula has to be burned down to be saved, is it really worth it?)—among other considerations.

The other questions to ask are about the nature of the combat. Are soldiers taking care to target only North Korea’s military? Once the decision has been made that Kim’s nuclear weapons pose an imminent threat, hypothetically, that still wouldn’t make it acceptable to firebomb Pyongyang to turn the population against him. Similarly, American forces could not, say, blow up a bus full of children just because one of Kim’s generals was trying to escape on it.

The article is here.

Wednesday, January 10, 2018

Our enemies are human: that’s why we want to kill them

Tage Rai, Piercarlo Valdesolo, and Jesse Graham
aeon.co
Originally posted December 13, 2017

Here are two excerpts:

What we found was that dehumanising victims predicts support for instrumental violence, but not for moral violence. For example, Americans who saw Iraqi civilians as less human were more likely to support drone strikes in Iraq. In this case, no one wants to kill innocent civilians, but if they die as collateral damage in the pursuit of killing ISIS terrorists, dehumanising them eases our guilt. In contrast, seeing ISIS terrorists as less human predicted nothing about support for drone strikes against them. This is because people want to hurt and kill terrorists. Without their humanity, how could terrorists be guilty, and how could they feel the pain that they deserve?

(cut)

Many people believe that it is only a breakdown in our moral sensibilities that causes violence. To reduce violence, according to this argument, we need only restore our sense of morality by generating empathy toward victims. If we could just see them as fellow human beings, then we would do them no harm. Yet our research suggests that this is untrue. In cases of moral violence, our experiments suggest that it is the engagement of our moral sense, not its disengagement, that often causes aggression. When Myanmar security forces plant landmines at the Bangladesh border in an attempt to kill the Rohingya minorities who are trying to escape the slaughter, the primary driver of their behaviour is not dehumanisation, but rather moral outrage toward an enemy conceptualised as evil, but also completely human.

The article is here.

Saturday, December 9, 2017

The Root of All Cruelty?

Paul Bloom
The New Yorker
Originally published November 20, 2017

Here are two excerpts:

Early psychological research on dehumanization looked at what made the Nazis different from the rest of us. But psychologists now talk about the ubiquity of dehumanization. Nick Haslam, at the University of Melbourne, and Steve Loughnan, at the University of Edinburgh, provide a list of examples, including some painfully mundane ones: “Outraged members of the public call sex offenders animals. Psychopaths treat victims merely as means to their vicious ends. The poor are mocked as libidinous dolts. Passersby look through homeless people as if they were transparent obstacles. Dementia sufferers are represented in the media as shuffling zombies.”

The thesis that viewing others as objects or animals enables our very worst conduct would seem to explain a great deal. Yet there’s reason to think that it’s almost the opposite of the truth.

(cut)

But “Virtuous Violence: Hurting and Killing to Create, Sustain, End, and Honor Social Relationships” (Cambridge), by the anthropologist Alan Fiske and the psychologist Tage Rai, argues that these standard accounts often have it backward. In many instances, violence is neither a cold-blooded solution to a problem nor a failure of inhibition; most of all, it doesn’t entail a blindness to moral considerations. On the contrary, morality is often a motivating force: “People are impelled to violence when they feel that to regulate certain social relationships, imposing suffering or death is necessary, natural, legitimate, desirable, condoned, admired, and ethically gratifying.” Obvious examples include suicide bombings, honor killings, and the torture of prisoners during war, but Fiske and Rai extend the list to gang fights and violence toward intimate partners. For Fiske and Rai, actions like these often reflect the desire to do the right thing, to exact just vengeance, or to teach someone a lesson. There’s a profound continuity between such acts and the punishments that—in the name of requital, deterrence, or discipline—the criminal-justice system lawfully imposes. Moral violence, whether reflected in legal sanctions, the killing of enemy soldiers in war, or punishing someone for an ethical transgression, is motivated by the recognition that its victim is a moral agent, someone fully human.

The article is here.

Sunday, April 10, 2016

The Paradox of Nonlethal Weapons

Fritz Allhoff
Law and Bioethics Blog
Originally published March 10, 2016

Here are two excerpts:

These are all examples of lethal weapons. Importantly, though, there are myriad restrictions on the use of nonlethal weapons as well. And this gives rise to what I’ll call the “paradox of nonlethal weapons.” The paradox is simply that, sometimes, international law allows soldiers to kill, but not to disable. Or, in other words, some nonlethal weapons may be prohibited, while, at the same time, some lethal weaponry is not. As Donald Rumsfeld put it, “in many instances, our forces are allowed to shoot somebody and kill them, but they’re not allowed to use a nonlethal riot control agent.”

(cut)

Regardless of the specific technologies, though, the general question is this: why should there be limits on nonlethal weapons at the same time that lethal weapons are allowed? This leads to the curious—and perhaps perverse—outcome that enemy combatants can be killed, but not even temporarily disabled.

The article is here.

Friday, November 13, 2015

Why Self-Driving Cars Must Be Programmed to Kill

Emerging Technology From the arXiv
MIT Technology Review
Originally published October 22, 2015

Here is an excerpt:

One way to approach this kind of problem is to act in a way that minimizes the loss of life. By this way of thinking, killing one person is better than killing 10.

But that approach may have other consequences. If fewer people buy self-driving cars because they are programmed to sacrifice their owners, then more people are likely to die because ordinary cars are involved in so many more accidents. The result is a Catch-22 situation.

Bonnefon and co are seeking to find a way through this ethical dilemma by gauging public opinion. Their idea is that the public is much more likely to go along with a scenario that aligns with their own views.
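
The utilitarian approach described in the excerpt amounts to a simple decision rule: score each available manoeuvre by its expected casualties and choose the minimum. Here is a minimal, purely illustrative Python sketch of that rule; it is not from the article or from Bonnefon's study, and the trajectory names and casualty numbers are hypothetical.

```python
# A toy version of the utilitarian "minimize the loss of life" rule.
# Everything here is hypothetical: a real planner would reason under
# uncertainty and weigh far more than expected deaths.

def choose_trajectory(options):
    """Return the option with the fewest expected casualties."""
    return min(options, key=lambda opt: opt["expected_casualties"])

options = [
    {"name": "swerve into barrier (sacrifice the passenger)",
     "expected_casualties": 1},
    {"name": "stay on course (hit the pedestrians)",
     "expected_casualties": 10},
]

print(choose_trajectory(options)["name"])
# prints: swerve into barrier (sacrifice the passenger)
```

Even this toy version makes the Catch-22 concrete: the rule sacrifices the car's own passenger whenever that choice lowers the expected death toll, which is precisely the programming the article suggests buyers may reject.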

The entire article is here.