Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, October 30, 2018

How Trump’s Hateful Speech Raises the Risks of Violence

Cass Sunstein
Bloomberg.com
Originally posted October 28, 2018

Here is an excerpt:

Is President Donald Trump responsible, in some sense, for the mailing of bombs to Hillary Clinton and other Democratic leaders? Is he responsible, in some sense, for the slaughter at the Pittsburgh synagogue?

If we are speaking in terms of causation, the most reasonable answer to both questions, and the safest, is: We don’t really know. More specifically, we don’t know whether these particular crimes would have occurred in the absence of Trump’s hateful and vicious rhetoric (including his enthusiasm for the despicable cry, “Lock her up!”).

But it’s also safe, and plenty reasonable, to insist that across the American population, hateful and vicious rhetoric from the president of the United States is bound to increase risks of violence. Because of that rhetoric, the likelihood of this kind of violence is greater than it would otherwise be. The president is responsible for elevating the risk that people will try to kill Democrats and others seen by some of his followers as “enemies of the people” (including journalists and Jews).

To see why, we should investigate one of the most striking findings in modern social psychology that has been replicated on dozens of occasions. It goes by the name of “group polarization.”

The basic idea is that when people are listening and talking to one another, they tend to end up in a more extreme position in the same direction as the views with which they began. Groups of like-minded people can become radicalized.

The info is here.

West Virginia Poll examines moral and social issues

Brad McElhinny
wvmetronews.com
Originally posted September 30, 2018

Here is an excerpt:

Role of God in morality

There was a 50-50 split in a question asking respondents to select the statement that best reflects their view of the role of God in morality.

Half responded, “It is not necessary to believe in God in order to be moral and have good values.”

The other half of respondents chose the option “It is necessary to believe in God in order to be moral and have good values.”

“The two big, significant differences are younger people and self-identified conservatives who have opposite points of view on this question,” said professional pollster Rex Repass, the author of the West Virginia Poll.

Of younger people — those between ages 18 and 34 — 60 percent said it’s not necessary to believe in God to have good moral and ethical values.

That compared to 35 percent of those ages 55-64 who answered with that statement.

“So generally, if you’re under 35, you’re more likely to say it’s not necessary to have a higher being in your life to have good values,” Repass said.

“If you’re older, that percentage increases. You’re more likely to believe you have to have God in your life to be moral and have good values.”

Of respondents who labeled themselves as conservative, 73 percent said it is necessary to believe in God to have moral values.

The info is here.

Monday, October 29, 2018

We hold people with power to account. Why not algorithms?

Hannah Fry
The Guardian
Originally published September 17, 2018

Here is an excerpt:

But already in our hospitals, our schools, our shops, our courtrooms and our police stations, artificial intelligence is silently working behind the scenes, feeding on our data and making decisions on our behalf. Sure, this technology has the capacity for enormous social good – it can help us diagnose breast cancer, catch serial killers, avoid plane crashes and, as the health secretary, Matt Hancock, has proposed, potentially save lives using NHS data and genomics. Unless we know when to trust our own instincts over the output of a piece of software, however, it also brings the potential for disruption, injustice and unfairness.

If we permit flawed machines to make life-changing decisions on our behalf – by allowing them to pinpoint a murder suspect, to diagnose a condition or take over the wheel of a car – we have to think carefully about what happens when things go wrong.

Back in 2012, a group of 16 Idaho residents with disabilities received some unexpected bad news. The Department of Health and Welfare had just invested in a “budget tool” – a swish piece of software, built by a private company, that automatically calculated their entitlement to state support. It had declared that their care budgets should be slashed by several thousand dollars each, a decision that would put them at serious risk of being institutionalised.

The problem was that the budget tool’s logic didn’t seem to make much sense. While this particular group of people had deep cuts to their allowance, others in a similar position actually had their benefits increased by the machine. As far as anyone could tell from the outside, the computer was essentially plucking numbers out of thin air.

The info is here.

The dismantling of informed consent is a disaster

David Penner
KevinMD.com
Originally posted September 26, 2018

Informed consent is the cornerstone of medical ethics. And every physician must defend this sacred principle from every form of evil that would seek to dismantle, degrade and debase it. If informed consent is the sun, then privacy, confidentiality, dignity, and trust are planets that go around it. For without informed consent, the descent of health care into amorality is inevitable, and the doctor-patient relationship is doomed to ruination, oblivion, and despair. It is also important to acknowledge the fact that a lack of informed consent has become endemic to our health care system.

This betrayal of patient trust is inextricably linked to three violations: a rape of the body, a rape of the mind and a rape of the soul. The rape of the mind is anchored in a willful nondisclosure of common long-term side effects associated with powerful drugs, such as opioids and certain types of chemotherapy. When a patient starts a chemotherapy regimen, they are typically briefed by a nurse, who proceeds to educate them regarding common short-term side effects such as mouth sores, constipation, and nausea, while failing to mention any of the typical long-term side effects, such as cognitive difficulties and early menopause. It is the long-term side effects that underscore the tragedy of having to resort to chemotherapy, as they can have a devastating impact on a patient’s quality of life, even long after remission has been attained.

The info is here.

Sunday, October 28, 2018

Moral enhancement and the good life

Hazem Zohny
Med Health Care and Philos (2018).
https://doi.org/10.1007/s11019-018-9868-4

Abstract

One approach to defining enhancement is in the form of bodily or mental changes that tend to improve a person’s well-being. Such a “welfarist account”, however, seems to conflict with moral enhancement: consider an intervention that improves someone’s moral motives but which ultimately diminishes their well-being. According to the welfarist account, this would not be an instance of enhancement—in fact, as I argue, it would count as a disability. This seems to pose a serious limitation for the account. Here, I elaborate on this limitation and argue that, despite it, there is a crucial role for such a welfarist account to play in our practical deliberations about moral enhancement. I do this by exploring four scenarios where a person’s motives are improved at the cost of their well-being. A framework emerges from these scenarios which can clarify disagreements about moral enhancement and help sharpen arguments for and against it.

The article is here.

Saturday, October 27, 2018

Obtaining consensus in psychotherapy: What holds us back?

Goldfried, M.R.
American Psychologist
2018

Abstract

Although the field of psychotherapy has been in existence for well over a century, it nonetheless continues to be preparadigmatic, lacking a consensus or scientific core. Instead, it is characterized by a large and increasing number of different schools of thought. In addition to the varying ways in which psychotherapy has been conceptualized, there also exists a long-standing gap between psychotherapy research and how it is conducted in actual clinical practice. Finally, there also exists a tendency to place great emphasis on what is new, often rediscovering or reinventing past contributions. This article describes each of these impediments to obtaining consensus and offers some suggestions for what might be done to address them.

Here is an excerpt:

There are at least three problematic issues that seem to contribute to the difficulty we have in obtaining a consensus within the field of psychotherapy: The first involves our long-standing practice of solely working within theoretical orientations or eclectic combinations of orientations. Moreover, not agreeing with those having other frameworks on how to bring about therapeutic change results in the proliferation of schools of therapy (Goldfried, 1980). The second issue involves the longstanding gap between research and practice, where many therapists may fail to see the relevance to their day-to-day clinical practice and also where many researchers do not make systematic use of clinical observations as a means of guiding their research (Goldfried, 1982). The third issue is our tendency to neglect past contributions to the field (Goldfried, 2000). We do not build on our previous body of knowledge but rather rediscover what we already know or—even worse—ignore past work and replace it with something new. What follows is a description of how these three issues prevent psychotherapy from achieving a consensus, after which there will be a consideration of some possible steps that might be taken in working toward a resolution of these issues.

The article is here, behind a paywall.

Friday, October 26, 2018

Ethics, a Psychological Perspective

Andrea Dobson
www.infoq.com
Originally posted September 22, 2018

Key Takeaways
  • With emerging technologies like machine learning, developers can now achieve much more than ever before. But this new power has a downside.
  • When we talk about ethics - the principles that govern a person's behaviour - it is impossible not to talk about psychology.
  • Processes like obedience, conformity, moral disengagement, cognitive dissonance and moral amnesia all reveal why, though we see ourselves as inherently good, in certain circumstances we are likely to behave badly.
  • Recognising that people, while not rational, are to a large degree predictable has profound implications for how tech and business leaders can approach making their organisations more ethical.
  • The strongest way to make a company more ethical is to start with the individual. Companies become ethical one person at a time, one decision at a time. We all want to be seen as good people; this sense of ourselves is known as our moral identity, and it comes with the responsibility to act accordingly.

The Ethics Of Transhumanism And The Cult Of Futurist Biotech

Julian Vigo
Forbes.com
Originally posted September 24, 2018

Here is an excerpt:

The philosophical tenets, academic theories, and institutional practices of transhumanism are well-known. Max More, a British philosopher and leader of the extropian movement, claims that transhumanism is the “continuation and acceleration of the evolution of intelligent life beyond its currently human form and human limitations by means of science and technology, guided by life-promoting principles and values.” This very definition, however, is a paradox since the ethos of this movement is to promote life through that which is not life, even by removing pieces of life, to create something billed as meta-life. Indeed, it is clear that transhumanism banks on its own contradiction: that life is deficient as is, yet can be bettered by prolonging life even to the detriment of life.

Stefan Lorenz Sorgner is a German philosopher and bioethicist who has written widely on the ethical implications of transhumanism, including writings on cryonics and the longevity of human life, all of which go against most ecological principles given the amount of resources needed to keep a body in “suspended animation” post-death. Sorgner’s writings, like those of Kyle Munkittrick, invoke an almost naïve rejection of death, treating death as neither “natural” nor a part of human evolution. In fact, much of the writing on transhumanism takes a radical approach to technology: anyone who dares to question cutting off healthy limbs to make way for a super-Olympian sportsperson would be called a Luddite, anti-technology. But that is a false dichotomy, since most critics of transhumanism are not against all technology; rather, they question the ethics of any technology that interferes with human rights.

The info is here.

Thursday, October 25, 2018

Novartis links bonuses to ethics in bid to rebuild reputation

John Miller
Reuters
Originally posted September 17, 2018

Swiss drugmaker Novartis (NOVN.S) has revealed its employees only get a bonus if they meet or exceed expectations for ethical behavior as it seeks to address past shortcomings that have damaged its reputation.

Chief Executive Vas Narasimhan has made strengthening the Swiss drugmaker’s ethics culture a priority after costly bribery scandals or legal settlements in South Korea, China and the United States.

Employees now receive a 1, 2 or 3 score on their values and behavior. Receiving a 2, which Novartis said denotes meeting expectations, or a 3, for “role model” behavior, would make them eligible for a bonus of up to 35 percent of their total compensation.

Novartis said it began the scoring system in 2016 but details have not been widely reported. Company officials outlined the system on Monday on a call about its ethics efforts with analysts and journalists.
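
To make the mechanics of the excerpt concrete, here is a minimal sketch of the eligibility rule as described above. The 1-to-3 values score, the meaning of the 2 and 3 ratings, and the 35 percent ceiling are taken from the article; the function names, the total-compensation input, and the payout arithmetic are illustrative assumptions, not Novartis's actual formula.

```python
# Minimal illustrative sketch of the scoring rule described in the excerpt.
# The 1-3 values score, the "meets expectations"/"role model" labels and the
# 35 percent ceiling come from the article; the function names, the
# total-compensation input and the payout calculation are assumptions.

MAX_BONUS_RATE = 0.35  # "up to 35 percent of their total compensation"


def bonus_eligible(values_score: int) -> bool:
    """A values score of 2 (meets expectations) or 3 (role model) qualifies."""
    if values_score not in (1, 2, 3):
        raise ValueError("values score must be 1, 2 or 3")
    return values_score >= 2


def max_bonus(total_compensation: float, values_score: int) -> float:
    """Upper bound on the bonus; the actual payout formula is not described."""
    if not bonus_eligible(values_score):
        return 0.0
    return MAX_BONUS_RATE * total_compensation


if __name__ == "__main__":
    print(bonus_eligible(1))        # False: a score of 1 gets no bonus
    print(max_bonus(100_000.0, 3))  # 35000.0: ceiling for a "role model" score
```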

The info is here.