Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, July 31, 2017

Truth or Punishment: Secrecy and Punishing the Self

Michael L. Slepian and Brock Bastian
Personality and Social Psychology Bulletin
First Published July 14, 2017, 1–17

Abstract

We live in a world that values justice; when a crime is committed, just punishment is expected to follow. Keeping one’s misdeed secret therefore appears to be a strategic way to avoid (just) consequences. Yet, people may engage in self-punishment to right their own wrongs and restore their personal sense of justice. Thus, those who seek an escape from justice by keeping secrets may in fact end up serving that same justice on themselves (through self-punishment). Six studies demonstrate that thinking about secret (vs. confessed) misdeeds leads to increased self-punishment (increased denial of pleasure and seeking of pain). These effects were mediated by the feeling that one deserved to be punished, were moderated by the significance of the secret, and were observed for both self-reported and behavioral measures of self-punishment.

Here is an excerpt:

Recent work suggests, however, that people who are reminded of their own misdeeds will sometimes seek out their own justice. That is, even subtle acts of self-punishment can restore a sense of personal justice, whereby a wrong feels to have been righted (Bastian et al., 2011; Inbar et al., 2013). Thus, we predicted that even though keeping a misdeed secret could lead one to avoid being punished by others, it still could prompt a desire for punishment all the same, one inflicted by the self.

The article is here.

Note: There are significant implications in this article for psychotherapists.
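
A further note on the statistics: when the abstract says the effect was "mediated by the feeling one deserved to be punished," it is referring to a standard mediation analysis. The toy simulation below (my own sketch in Python with invented numbers and variable names, not the authors' code or data) illustrates the underlying logic: the predictor shifts the mediator, the mediator shifts the outcome, and the indirect effect is the product of the two paths.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # Invented data: secrecy coded 0 = confessed misdeed, 1 = secret misdeed.
    secrecy = rng.integers(0, 2, n).astype(float)
    # Mediator: felt deservingness of punishment, raised by secrecy (path a).
    deservingness = 0.5 * secrecy + rng.normal(0, 1, n)
    # Outcome: self-punishment, driven mainly by deservingness (path b).
    self_punishment = 0.7 * deservingness + 0.1 * secrecy + rng.normal(0, 1, n)

    # Path a: slope from regressing the mediator on the predictor.
    a = np.polyfit(secrecy, deservingness, 1)[0]

    # Paths b and c': regress the outcome on mediator and predictor together.
    X = np.column_stack([np.ones(n), deservingness, secrecy])
    coefs, *_ = np.linalg.lstsq(X, self_punishment, rcond=None)
    b, c_prime = coefs[1], coefs[2]

    print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_prime:.2f}")
    # A sizable indirect effect alongside a small direct effect is the
    # statistical signature of mediation: secrecy raises felt deservingness,
    # which in turn raises self-punishment.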

Is it dangerous to recreate flawed human morality in machines?

Alexandra Myers-Lewis
Wired.com
Originally published July 13, 2017

Here are two excerpts:

The need for ethical machines may be one of the defining issues of our time. Algorithms are created to govern critical systems in our society, from banking to medicine, but with no concept of right and wrong, machines cannot understand the repercussions of their actions. A machine has never thrown a punch in a schoolyard fight, cheated on a test or a relationship, or been racked with the special kind of self-doubt that funds our cosmetic and pharmaceutical industries. Simply put, an ethical machine will always be an it - but how can it be more?

(cut)

A self-driving car wouldn't just have to make decisions in life-and-death situations - as if that wasn't enough - but would also need to judge how much risk is acceptable at any given time. But who will ultimately restrict this decision-making process? Would it be the job of the engineer to determine in which circumstances it is acceptable to overtake a cyclist? You won't lose sleep pegging a deer over a goat. But a person? Choosing who potentially lives and dies based on a number has an inescapable air of dystopia. You may see tight street corners and hear the groan of oncoming traffic, but an algorithm will only see the world in numbers. These numbers will form its memories and its reason, the force that moves the car out into the road.

"I think people will be very uncomfortable with the idea of a machine deciding between life and death," Sütfeld says, "In this regard we believe that transparency and comprehensibility could be a very important factor to gain public acceptance of these systems. Or put another way, people may favour a transparent and comprehensible system over a more complex black-box system. We would hope that the people will understand this general necessity of a moral compass and that the discussion will be about what approach to take, and how such systems should decide. If this is put in, every car will make the same decision and if there is a good common ground in terms of model, this could improve public safety."

The article is here.
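
To make concrete the excerpt's point that "an algorithm will only see the world in numbers," here is a deliberately simplistic sketch (my own illustration in Python; every weight, threshold, and function name is invented, and real driving stacks work nothing like this) of what an overtaking decision looks like once each consideration has been reduced to a number:

    # Toy cost-based rule for deciding whether to overtake a cyclist.
    # All weights and thresholds below are invented for illustration.

    def overtake_cost(gap_m: float, oncoming_s: float, speed_kmh: float) -> float:
        """Lower cost means a safer overtake; each factor becomes a number."""
        clearance = max(0.0, 1.5 - gap_m) * 10.0    # want >= 1.5 m of lateral gap
        traffic = max(0.0, 5.0 - oncoming_s) * 4.0  # want >= 5 s before oncoming car
        speed = max(0.0, speed_kmh - 50.0) * 0.2    # penalize high passing speed
        return clearance + traffic + speed

    def should_overtake(gap_m, oncoming_s, speed_kmh, threshold=2.0):
        return overtake_cost(gap_m, oncoming_s, speed_kmh) < threshold

    print(should_overtake(gap_m=1.8, oncoming_s=8.0, speed_kmh=45.0))  # True
    print(should_overtake(gap_m=1.0, oncoming_s=3.0, speed_kmh=60.0))  # False

Whoever chooses those weights and thresholds is, in effect, encoding a moral judgment in advance - which is precisely the discomfort the article describes.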

Sunday, July 30, 2017

Should we be afraid of AI?

Luciano Floridi
Aeon
Originally published

Here is an excerpt:

True AI is not logically impossible, but it is utterly implausible. We have no idea how we might begin to engineer it, not least because we have very little understanding of how our own brains and intelligence work. This means that we should not lose sleep over the possible appearance of some ultraintelligence. What really matters is that the increasing presence of ever-smarter technologies is having huge effects on how we conceive of ourselves, the world, and our interactions. The point is not that our machines are conscious, or intelligent, or able to know something as we do. They are not. There are plenty of well-known results that indicate the limits of computation, so-called undecidable problems for which it can be proved that it is impossible to construct an algorithm that always leads to a correct yes-or-no answer.

We know, for example, that our computational machines satisfy the Curry-Howard correspondence, which indicates that proof systems in logic on the one hand and the models of computation on the other are in fact structurally the same kind of objects, and so any logical limit applies to computers as well. Plenty of machines can do amazing things, including playing checkers, chess and Go and the quiz show Jeopardy better than us. And yet they are all versions of a Turing Machine, an abstract model that sets the limits of what can be done by a computer through its mathematical logic.

Quantum computers are constrained by the same limits, the limits of what can be computed (so-called computable functions). No conscious, intelligent entity is going to emerge from a Turing Machine. The point is that our smart technologies – also thanks to the enormous amount of available data and some very sophisticated programming – are increasingly able to deal with more tasks better than we do, including predicting our behaviours. So we are not the only agents able to perform tasks successfully.
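
Floridi's mention of "undecidable problems" can be made concrete with the best-known example, the halting problem. The sketch below is the standard textbook diagonalization argument rendered as Python (my own illustration, not Floridi's material): assume a perfect halting oracle exists, build a program that contradicts it, and the assumption collapses.

    # Suppose, for contradiction, we had a perfect halting oracle:

    def halts(program, argument) -> bool:
        """Hypothetical: True iff program(argument) eventually halts.
        No total, always-correct implementation can exist."""
        raise NotImplementedError

    def contrarian(program):
        # Do the opposite of whatever the oracle predicts about
        # running `program` on its own source.
        if halts(program, program):
            while True:   # predicted to halt, so loop forever
                pass
        else:
            return        # predicted to loop, so halt immediately

    # Does contrarian(contrarian) halt?
    # If halts(contrarian, contrarian) is True, contrarian loops forever.
    # If it is False, contrarian halts at once.
    # Either answer makes the oracle wrong, so no such oracle can exist -
    # a yes-or-no question no algorithm can always answer correctly.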

Engineering Eden: The quest for eternal life

Kristin Kostick
Baylor College of Medicine
Originally posted June 2, 2017

If you’re like most people, you may associate the phrase “eternal life” with religion: The promise that we can live forever if we just believe in God. You probably don’t associate the phrase with an image of scientists working in a lab, peering at worms through microscopes or mice skittering through boxes. But you should.

The quest for eternal life has only recently begun to step out from behind the pews and into the petri dish.

I recently discussed the increasing feasibility of the transhumanist vision due to continuing advancements in biotech, gene- and cell-therapies. These emerging technologies, however, don’t erase the fact that religion – not science – has always been our salve for confronting death’s inevitability. For believers, religion provides an enduring mechanism (belief and virtue) behind the perpetuity of existence, and shushes our otherwise frantic inability to grasp: How can I, as a person, just end?

The Mormon transhumanist Lincoln Cannon argues that science, rather than religion, offers a tangible solution to this most basic existential dilemma. He points out that it is no longer tenable to believe in eternal life as only available in heaven, requiring the death of our earthly bodies before becoming eternal, celestial beings.

Would a rational person choose to believe in an uncertain, spiritual afterlife over the tangible persistence of one’s own familiar body and the comforting security of relationships we’ve fostered over a lifetime of meaningful interactions?

The article is here.

Saturday, July 29, 2017

On ethics, Trump is leading America in the wrong direction

Jeffrey D. Sachs
CNN.com
Originally published July 26, 2017

Here is an excerpt:

So here we are. Bribes are no longer bribes, campaign funds from corporations are free speech, and the politicians are just being good public servants when they accept money from those who seek their favor. Crooked politicians are thrilled; the rest of us look on shocked at the pageantry of cynicism and immorality. Senior officials in law-abiding countries have told me they can hardly believe their eyes as to what is underway in the United States.

Which brings us to Donald Trump. Trump seems to know no limits whatsoever in his commingling of the public interest and his personal business interests. He failed to give up his ownership interest in his businesses upon taking office. (Trump resigned from positions in his companies and said his two sons are in charge.)

Government and Republican Party activities have been booked into Trump properties. Trump campaign funds are used to hire lawyers to defend Donald Trump Jr. in the Russia probe. Campaign associates such as Paul Manafort and Michael Flynn have been under scrutiny for their business dealings with clients tied to foreign governments.

In response to the stench, the former head of the government ethics office recently resigned, declaring that the United States is "pretty close to a laughingstock at this point." The resignation was not remarkable under the circumstances. What is remarkable is that most Republican politicians remain mum about these abuses. Of course, too many politicians of both parties are deeply compromised by financial dependence on corporate campaign donors.

The article is here.

Trump Has Plunged Nation Into ‘Ethics Crisis,’ Ex-Watchdog Says

Britain Eakin
Courthouse News Service
Originally published July 28, 2017

The government’s former top ethics chief sounded the alarm Friday, saying the first six months of the Trump administration have been “an absolute shock to the system” that has plunged the nation into “an ethics crisis.”

Walter Shaub Jr. resigned July 6 after months of clashes with the White House over issues such as President Trump’s refusal to divest his businesses and the administration’s delay in disclosing ethics waivers for appointees.

As he left office he told NPR that “the current situation has made it clear that the ethics program needs to be stronger than it is.”

He did not elaborate at that time on what about the “situation” so troubled him, but he said that at the Campaign Legal Center he would have more freedom “to push for reform” while broadening his focus to ethics issues at all levels of government.

During a talk at the National Press Club Friday morning, Shaub said the president and other administration officials have departed from ethical principles and norms as part of a broader assault on the American representative form of government.

Shaub said he is “extremely concerned” by this.

“The biggest concern is that norms evolve. So if we have a shock to the system, what we’re experiencing now could become the new norm,” Shaub said.

The article is here.

Ethics and Governance AI Fund funnels $7.6M to Harvard, MIT and independent research efforts

Devin Coldewey
Tech Crunch
Originally posted July 11, 2017

A $27 million fund aimed at applying artificial intelligence to the public interest has announced the first targets for its beneficence: $7.6 million will be split unequally among MIT’s Media Lab, Harvard’s Berkman Klein Center and seven smaller research efforts around the world.

The Ethics and Governance of Artificial Intelligence Fund was created by Reid Hoffman, Pierre Omidyar and the Knight Foundation back in January; the intention was to ensure that “social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers” have a say in how AI is developed and deployed.

To that end, this first round of funding supports existing organizations working along those lines, as well as nurturing some newer ones.

The lion’s share of this initial round, $5.9 million, will be split between MIT and Harvard, as the initial announcement indicated. The Media Lab is, of course, on the cutting edge of many research efforts in AI and elsewhere; Berkman Klein focuses more on the legal and analytical side of things.

The fund’s focuses are threefold:

  • Media and information quality – looking at how to understand and control the effects of autonomous information systems and “influential algorithms” like Facebook’s news feed.
  • Social and criminal justice – perhaps the area where the bad influence of AI-type systems could be the most insidious; biases in data and interpretation could be baked into investigative and legal systems, giving them the illusion of objectivity. (Obviously the fund seeks to avoid this.)
  • Autonomous cars – although this may seem incongruous with the others, self-driving cars represent an immense social opportunity. Mobility is one of the most influential socioeconomic factors, and its reinvention offers a chance to improve the condition of nearly everyone on the planet — great potential for both advancement and abuse.

Friday, July 28, 2017

You are fair, but I expect you to also behave unfairly: Positive asymmetry in trait-behavior relations for moderate morality information

Patrice Rusconi, Simona Sacchi, Roberta Capellini, Marco Brambilla, Paolo Cherubini
PLOS One
Published: July 11, 2017

Summary: People who are believed to be immoral find it difficult to reverse others' perception of them, potentially resulting in difficulties in the workplace and barriers to accessing fair and equal treatment in the legal system.

Abstract

Trait inference in person perception is based on observers’ implicit assumptions about the relations between trait adjectives (e.g., fair) and the either consistent or inconsistent behaviors (e.g., having double standards) that an actor can manifest. This article presents new empirical data and theoretical interpretations on people’s behavioral expectations, that is, people’s perceived trait-behavior relations along the morality (versus competence) dimension. We specifically address the issue of the moderate levels of both traits and behaviors almost neglected by prior research by using a measure of the perceived general frequency of behaviors. A preliminary study identifies a set of competence- and morality-related traits and a subset of traits balanced for valence. Studies 1–2 show that moral target persons are associated with greater behavioral flexibility than immoral ones where abstract categories of behaviors are concerned. For example, participants judge it more likely that a fair person would behave unfairly than that an unfair person would behave fairly. Study 3 replicates the results of the first 2 studies using concrete categories of behaviors (e.g., telling the truth/omitting some information). Study 4 shows that the positive asymmetry in morality-related trait-behavior relations holds for both North American and European (i.e., Italian) individuals. A small-scale meta-analysis confirms the existence of a positive asymmetry in trait-behavior relations along both morality and competence dimensions for moderate levels of both traits and behaviors. We discuss these findings in relation to prior models and results on trait-behavior relations, and we advance a motivational explanation based on self-protection.

The article is here.

Note: This research also applies to perceptions in psychotherapy and in family relationships.
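
For readers curious about the "small-scale meta-analysis" the abstract mentions: the standard fixed-effect approach pools each study's effect size, weighting by precision. Here is a minimal sketch of that textbook computation (the effect sizes and standard errors are invented for illustration, not the values reported in the paper):

    import math

    # Invented example: effect sizes (e.g., Cohen's d) and standard errors
    # from four hypothetical studies.
    effects = [0.40, 0.55, 0.30, 0.48]
    std_errors = [0.15, 0.20, 0.12, 0.18]

    # Fixed-effect meta-analysis: weight each study by its inverse variance.
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))

    print(f"pooled effect = {pooled:.2f} +/- {1.96 * pooled_se:.2f}")
    # A pooled estimate reliably above zero across studies is what licenses
    # the claim that the asymmetry is consistent rather than a one-off.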

I attend, therefore I am

Carolyn Dicey Jennings
Aeon.com
Originally published July 10, 2017

Here is an excerpt:

Following such considerations, the philosopher Daniel Dennett proposed that the self is simply a ‘centre of narrative gravity’ – just as the centre of gravity in a physical object is not a part of that object, but a useful concept we use to understand the relationship between that object and its environment, the centre of narrative gravity in us is not a part of our bodies, a soul inside of us, but a useful concept we use to make sense of the relationship between our bodies, complete with their own goals and intentions, and our environment. So, you, you, are a construct, albeit a useful one. Or so goes Dennett’s thinking on the self.

And it isn’t just Dennett. The idea that there is a substantive self is passé. When cognitive scientists aim to provide an empirical account of the self, it is simply an account of our sense of self – why it is that we think we have a self. What we don’t find is an account of a self with independent powers, responsible for directing attention and resolving conflicts of will.

There are many reasons for this. One is that many scientists think that the evidence counts in favour of our experience in general being epiphenomenal – something that does not influence our brain, but is influenced by it. In this view, when you experience making a tough decision, for instance, that decision was already made by your brain, and your experience is a mere shadow of that decision. So for the very situations in which we might think the self is most active – in resolving difficult decisions – everything is in fact already achieved by the brain.

The article is here.