Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, April 18, 2018

Is There A Difference Between Ethics And Morality In Business?

Bruce Weinstein
Forbes.com
Originally published February 23, 2018

Here is an excerpt:

In practical terms, if you use both “ethics” and “morality” in conversation, the people you’re speaking with will probably take issue with how you’re using these terms, even if they believe they’re distinct in some way.

The conversation will then veer from whatever substantive ethical point you were trying to make (“Our company has an ethical and moral responsibility to hire and promote only honest, accountable people”) to an argument about the meaning of the words “ethical” and “moral.” I had plenty of those arguments as a graduate student in philosophy, but is that the kind of discussion you really want to have at a team meeting or business conference?

You can do one of three things, then:

1. Use “ethics” and “morality” interchangeably only when you’re speaking with people who believe they’re synonymous.

2. Choose one term and stick with it.

3. Minimize the use of both words and instead refer to what each word is broadly about: doing the right thing, leading an honorable life and acting with high character.

As a professional ethicist, I’ve come to see #3 as the best option. That way, I don’t have to guess whether the person I’m speaking with believes ethics and morality are identical concepts, which is futile when you’re speaking to an audience of 5,000 people.

The information is here.

Note: I do not agree with everything in this article, but it is worth contemplating.

Why it’s a bad idea to break the rules, even if it’s for a good cause

Robert Wiblin
80000hours.org
Originally posted March 20, 2018

How honest should we be? How helpful? How friendly? If our society claims to value honesty, for instance, but in reality accepts an awful lot of lying – should we go along with those lax standards? Or, should we attempt to set a new norm for ourselves?

Dr Stefan Schubert, a researcher at the Social Behaviour and Ethics Lab at Oxford University, has been modelling this in the context of the effective altruism community. He thinks people trying to improve the world should hold themselves to very high standards of integrity, because their minor sins can impose major costs on the thousands of others who share their goals.

In addition, when a norm is uniquely important to our situation, we should be willing to question society and come up with something different and hopefully better.

But in other cases, we can be better off sticking with whatever our culture expects, to save time, avoid mistakes, and ensure others can predict our behaviour.

The key points and podcast are here.

Tuesday, April 17, 2018

Planning Complexity Registers as a Cost in Metacontrol

Kool, W., Gershman, S. J., & Cushman, F. A. (in press). Planning complexity registers as a cost in metacontrol. Journal of Cognitive Neuroscience.

Abstract

Decision-making algorithms face a basic tradeoff between accuracy and effort (i.e., computational demands). It is widely agreed that humans can choose between multiple decision-making processes that embody different solutions to this tradeoff: Some are computationally cheap but inaccurate, while others are computationally expensive but accurate. Recent progress in understanding this tradeoff has been catalyzed by formalizing it in terms of model-free (i.e., habitual) versus model-based (i.e., planning) approaches to reinforcement learning. Intuitively, if two tasks offer the same rewards for accuracy but one of them is much more demanding, we might expect people to rely on habit more in the difficult task: Devoting significant computation to achieve slight marginal accuracy gains wouldn’t be “worth it”. We test and verify this prediction in a sequential RL task. Because our paradigm is amenable to formal analysis, it contributes to the development of a computational model of how people balance the costs and benefits of different decision-making processes in a task-specific manner; in other words, how we decide when hard thinking is worth it.
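
The cost-benefit tradeoff described in the abstract can be illustrated with a minimal sketch of metacontrol arbitration: rely on the cheap habitual estimate unless the expected accuracy gain from planning outweighs its computational cost. This is not the authors' actual model; every function name, state label, and number below is an illustrative assumption.

# Illustrative sketch (not the paper's model): arbitration between a cheap
# model-free lookup and an expensive model-based planner, where the planner
# is used only when its expected gain exceeds its computational cost.

def model_free_value(state, q_table):
    # Cheap habitual estimate: a cached (possibly stale) value lookup.
    return q_table.get(state, 0.0)

def model_based_value(state, transition_model, reward_fn, depth=3):
    # Expensive planning: recursively search the known transition model.
    if depth == 0:
        return 0.0
    best = float("-inf")
    for action, next_state in transition_model[state].items():
        value = reward_fn(state, action) + model_based_value(
            next_state, transition_model, reward_fn, depth - 1)
        best = max(best, value)
    return best

def choose_controller(expected_gain, planning_cost):
    # Metacontrol step: plan only when the marginal accuracy gain is "worth it".
    return "model_based" if expected_gain > planning_cost else "model_free"

if __name__ == "__main__":
    # Toy task: deeper planning (task B) costs more, so the same marginal
    # gain no longer justifies the model-based system.
    transition_model = {
        "s0": {"left": "s1", "right": "s2"},
        "s1": {"stay": "s1"},
        "s2": {"stay": "s2"},
    }
    reward_fn = lambda state, action: {"s1": 1.0, "s2": 0.2}.get(state, 0.0)
    q_table = {"s0": 0.5}

    print(model_free_value("s0", q_table))                          # 0.5
    print(model_based_value("s0", transition_model, reward_fn))     # 2.0
    print(choose_controller(expected_gain=0.4, planning_cost=0.1))  # model_based
    print(choose_controller(expected_gain=0.4, planning_cost=0.6))  # model_free

The point of the sketch is only the arbitration step: as planning cost grows with task complexity, the same marginal gain stops being worth it, which is the prediction the study tests.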

The research is here.

Building A More Ethical Workplace Culture

PYMNTS
PYMNTS.com
Originally posted March 20, 2018

Here is an excerpt:

The Worst News

Among the positive findings in the report was the fact that reporting is on the rise by a whole 19 percent, with 69 percent of employees stating they had reported misconduct in the last two years.

But that number, Harned said, comes with a bitter side note. Retaliation has also spiked during the same time period, with 44 percent reporting it – up from 22 percent two years ago.

The rate of retaliation going up faster than the rate of reporting, Harned noted, is disturbing.

“That is a very real problem for employees, and I think over the last year, we’ve seen what a huge problem it has become for employers.”

The door-to-door on retaliation for reporting is short – about three weeks on average. That is just about the time it takes for firms – even those serious about doing a good job with improving compliance – to get any investigation up and organized.

“By then, the damage is already done,” said Harned. “We are better at seeing misconduct, but we aren’t doing enough to prevent it from happening – especially because retaliation is such a big problem.”

There are not easy solutions, Harned noted, but the good news – even in the face of the worst news – is that improvement is possible, and is even being logged in some segments. Employees, she stated, mostly come in the door with a moral compass to call their own, and want to work in environments that are healthy, not vicious.

“The answer is culture is everything: Companies need to constantly communicate to employees that conduct is the expectation for all levels of the organization, and that breaking those rules will always have consequences.”

The post is here.

Monday, April 16, 2018

The Seth Rich lawsuit matters more than the Stormy Daniels case

Jill Abramson
The Guardian
Originally published March 20, 2018

Here is an excerpt:

I’ve previously written about Fox News’ shameless coverage of the 2016 unsolved murder of a young former Democratic National Committee staffer named Seth Rich. Last week, ABC News reported that his family has filed a lawsuit against Fox, charging that several of its journalists fabricated a vile story attempting to link the hacked emails from Democratic National Committee computers to Rich, who worked there.

After the fabricated story ran on the Fox website, it was retracted, but not before various on-air stars, especially Trump mouthpiece Sean Hannity, flogged the bogus conspiracy theory suggesting Rich had something to do with the hacked messages.

This shameful episode demonstrated, once again, that Rupert Murdoch’s favorite network, and Trump’s, has no ethical compass and had no hesitation about what grief this manufactured story caused to the 26-year-old murder victim’s family. It’s good to see them striking back, since that is the only tactic that the Murdochs and Trumps of the world will respect or, perhaps, will force them to temper the calumny they spread on a daily basis.

Of course, the Rich lawsuit does not have the sex appeal of the Stormy case. The rightwing echo chamber will brazenly ignore its self-inflicted wounds. And, for the rest of the cable pundit brigades, the DNC emails and Rich are old news.

The article is here.

Psychotherapy Is 'The' Biological Treatment

Robert Berezin
Medscape.com
Originally posted March 16, 2018

Neuroscience surprisingly teaches us that not only is psychotherapy purely biological, but it is the only real biological treatment. It addresses the brain in the way it actually develops, matures, and operates. It follows the principles of evolutionary adaptation. It is consonant with genetics. And it specifically heals the problematic adaptations of the brain in precisely the ways that they evolved in the first place. Psychotherapy deactivates maladaptive brain mappings and fosters new and constructive pathways. Let me explain.

The operations of the brain are purely biological. The brain maps our experiences and memories through the linking of trillions of neuronal connections. These interconnected webs create larger circuits that map all throughout the architecture of the cortex. This generates high-level symbolic neuronal maps that take form as images in our consciousness. The play of consciousness is the highest level of symbolic form. It is a living theater of "image-ination," a representational world that consists of a cast of characters who relate together by feeling as well as scenarios, plots, set designs, and landscape.

As we adapt to our environment, the brain maps our emotional experience through cortical memory. This starts very early in life. If a baby is startled by a loud noise, his arms and legs will flail. His heart pumps adrenaline, and he cries. This "startle" maps a fight-or-flight response in his cortex, which is mapped through serotonin and cortisol. The baby is restored by his mother's holding. Her responsive repair once again re-establishes and maintains his well-being, which is mapped through oxytocin. These ongoing formative experiences of life are mapped into memory in precisely these two basic ways.

The article is here.

Sunday, April 15, 2018

What If There Is No Ethical Way to Act in Syria Now?

Sigal Samuel
The Atlantic
Originally posted April 13, 2018

For seven years now, America has been struggling to understand its moral responsibility in Syria. For every urgent argument to intervene against Syrian President Bashar al-Assad to stop the mass killing of civilians, there were ready responses about the risks of causing more destruction than could be averted, or even escalating to a major war with other powers in Syria. In the end, American intervention there has been tailored mostly to a narrow perception of American interests in stopping the threat of terror. But the fundamental questions are still unresolved: What exactly was the moral course of action in Syria? And more urgently, what—if any—is the moral course of action now?

The war has left roughly half a million people dead—the UN has stopped counting—but the question of moral responsibility has taken on new urgency in the wake of a suspected chemical attack over the weekend. As President Trump threatened to launch retaliatory missile strikes, I spoke about America’s ethical responsibility with some of the world’s leading moral philosophers. These are people whose job it is to ascertain the right thing to do in any given situation. All of them suggested that, years ago, America might have been able to intervene in a moral way to stop the killing in the Syrian civil war. But asked what America should do now, they all gave the same startling response: They don’t know.

The article is here.

What’s Next for Humanity: Automation, New Morality and a ‘Global Useless Class’

Kimiko de Freytas-Tamura
The New York Times
Originally published March 19, 2018

What will our future look like — not in a century but in a mere two decades?

Terrifying, if you’re to believe Yuval Noah Harari, the Israeli historian and author of “Sapiens” and “Homo Deus,” a pair of audacious books that offer a sweeping history of humankind and a forecast of what lies ahead: an age of algorithms and technology that could see us transformed into “super-humans” with godlike qualities.

In an event organized by The New York Times and How To Academy, Mr. Harari gave his predictions to the Times columnist Thomas L. Friedman. Humans, he warned, “have created such a complicated world that we’re no longer able to make sense of what is happening.” Here are highlights of the interview.

Artificial intelligence and automation will create a ‘global useless class.’

Just as the Industrial Revolution created the working class, automation could create a “global useless class,” Mr. Harari said, and the political and social history of the coming decades will revolve around the hopes and fears of this new class. Disruptive technologies, which have helped bring enormous progress, could be disastrous if they get out of hand.

“Every technology has a good potential and a bad potential,” he said. “Nuclear war is obviously terrible. Nobody wants it. The question is how to prevent it. With disruptive technology the danger is far greater, because it has some wonderful potential. There are a lot of forces pushing us faster and faster to develop these disruptive technologies and it’s very difficult to know in advance what the consequences will be, in terms of community, in terms of relations with people, in terms of politics.”

The article is here.

The video is worth watching.

Please read Sapiens and Homo Deus by Yuval Harari.

Saturday, April 14, 2018

The AI Cargo Cult: The Myth of a Superhuman AI

Kevin Kelly
www.wired.com
Originally published April 25, 2017

Here is an excerpt:

The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence — as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence of, say, a genius—almost as if intelligence were a sound level in decibels. Of course, it is then very easy to imagine the extension so that the loudness of intelligence continues to grow, eventually to exceed our own high intelligence and become a super-loud intelligence — a roar! — way beyond us, and maybe even off the chart.

This model is topologically equivalent to a ladder, so that each rung of intelligence is a step higher than the one before. Inferior animals are situated on lower rungs below us, while higher-level intelligence AIs will inevitably overstep us onto higher rungs. Time scales of when it happens are not important; what is important is the ranking—the metric of increasing intelligence.

The problem with this model is that it is mythical, like the ladder of evolution. The pre-Darwinian view of the natural world supposed a ladder of being, with inferior animals residing on rungs below human. Even post-Darwin, a very common notion is the “ladder” of evolution, with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it. So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view.

The information is here.