Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Wednesday, April 11, 2018

How One Bad Employee Can Corrupt a Whole Team

Stephen Dimmock and William C. Gerken
Harvard Business Review
Originally posted March 5, 2018

One bad apple, the saying goes, can ruin the bunch. So, too, with employees.

Our research on the contagiousness of employee fraud tells us that even your most honest employees become more likely to commit misconduct if they work alongside a dishonest individual. And while it would be nice to think that the honest employees would prompt the dishonest employees to better choices, that’s rarely the case.

Among co-workers, it appears easier to learn bad behavior than good.

For managers, it is important to realize that the costs of a problematic employee go beyond the direct effects of that employee’s actions — bad behaviors of one employee spill over into the behaviors of other employees through peer effects. When managers under-appreciate these spillover effects, a few malignant employees can infect an otherwise healthy corporate culture.

History and current events are littered with outbreaks of misconduct among co-workers: mortgage underwriters in the run-up to the financial crisis, stock brokers at boiler rooms such as Stratton Oakmont, and cross-selling by salespeople at Wells Fargo.

The information is here.

Tuesday, April 10, 2018

Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable?

Lily Frank and Sven Nyholm
Artificial Intelligence and Law
September 2017, Volume 25, Issue 3, pp 305–323

Abstract

The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three questions by examining the concept of consent in general, as well as critiques of its adequacy in the domain of sexual ethics; the relationship between consent and free will; and the relationship between consent and consciousness. Additionally we canvass the most influential existing literature on the ethics of sex with robots.

The article is here.

Should We Root for Robot Rights?

Evan Selinger
Medium.com
Originally posted February 15, 2018

Here is an excerpt:

Maybe there’s a better way forward — one where machines aren’t kept firmly in their machine-only place, humans don’t get wiped out Skynet-style, and our humanity isn’t sacrificed by giving robots a better deal.

While the legal challenges ahead may seem daunting, they pose enticing puzzles for many thoughtful legal minds, who are even now diligently embracing the task. Annual conferences like We Robot — to pick but one example — bring together the best and the brightest to imagine and propose creative regulatory frameworks that would impose accountability in various contexts on designers, insurers, sellers, and owners of autonomous systems.

From the application of centuries-old concepts like “agency” to designing cutting-edge concepts for drones and robots on the battlefield, these folks are ready to explore the hard problems of machines acting with varying shades of autonomy. For the foreseeable future, these legal theories will include clear lines of legal responsibility for the humans in the loop, particularly those who abuse technology either intentionally or through carelessness.

The social impacts of our seemingly insatiable need to interact with our devices have been drawing accelerated attention for at least a decade. From the American Academy of Pediatrics creating recommendations for limiting screen time to updating etiquette and social mores for devices while dining, we are attacking these problems through both institutional and cultural channels.

The article is here.

Monday, April 9, 2018

Use Your Brain: Artificial Intelligence Isn't Close to Replacing It

Leonid Bershidsky
Bloomberg.com
Originally posted March 19, 2018

Nectome promises to preserve the brains of terminally ill people in order to turn them into computer simulations -- at some point in the future when such a thing is possible. It's a startup that's easy to mock. Just beyond the mockery, however, lies an important reminder to remain skeptical of modern artificial intelligence technology.

The idea behind Nectome is known to mind uploading enthusiasts (yes, there's an entire culture around the idea, with a number of wealthy foundations backing the research) as "destructive uploading": A brain must be killed to map it. That macabre proposition has resulted in lots of publicity for Nectome, which predictably got lumped together with earlier efforts to deep-freeze millionaires' bodies so they could be revived when technology allows it. Nectome's biggest problem, however, isn't primarily ethical.

The company has developed a way to embalm the brain that keeps all its synapses visible under an electron microscope. That makes it possible to create a map of all of the brain's neuron connections, a "connectome." Nectome's founders believe that map is the most important element of the reconstructed human brain and that preserving it should keep all of a person's memories intact. But even these mind uploading optimists only expect the first 10,000-neuron network to be reconstructed sometime between 2021 and 2024.

The information is here.

Do Evaluations Rise With Experience?

Kieran O’Connor and Amar Cheema
Psychological Science 
First Published March 1, 2018

Abstract

Sequential evaluation is the hallmark of fair review: The same raters assess the merits of applicants, athletes, art, and more using standard criteria. We investigated one important potential contaminant in such ubiquitous decisions: Evaluations become more positive when conducted later in a sequence. In four studies, (a) judges’ ratings of professional dance competitors rose across 20 seasons of a popular television series, (b) university professors gave higher grades when the same course was offered multiple times, and (c) in an experimental test of our hypotheses, evaluations of randomly ordered short stories became more positive over a 2-week sequence. As judges completed repeated evaluations, they experienced more fluent decision making, producing more positive judgments (Study 4 mediation). This seemingly simple bias has widespread and impactful consequences for evaluations of all kinds. We also report four supplementary studies to bolster our findings and address alternative explanations.

The article is here.

Sunday, April 8, 2018

Can Bots Help Us Deal with Grief?

Evan Selinger
Medium.com
Originally posted March 13, 2018

Here are two excerpts:

Muhammad is under no illusion that he’s speaking with the dead. To the contrary, Muhammad is quick to point out that the simulation he created works well when generating scripts of predictable answers, but it has difficulty relating to current events, like a presidential election. In Muhammad’s eyes, this is a feature, not a bug.

Muhammad said that “out of good conscience” he didn’t program the simulation to be surprising, because that capability would deviate too far from the goal of “personality emulation.”

This constraint fascinates me. On the one hand, we’re all creatures of habit. Without habits, people would have to deliberate before acting every single time. This isn’t practically feasible, so habits can be beneficial when they function as shortcuts that spare us from paralysis resulting from overanalysis.

(cut)

The empty chair technique that I’m referring to was popularized by Friedrich Perls (more widely known as Fritz Perls), a founder of Gestalt therapy. The basic setup looks like this: Two chairs are placed near each other; a psychotherapy patient sits in one chair and talks to the other, unoccupied chair. When talking to the empty chair, the patient engages in role-playing and acts as if a person is seated right in front of her — someone to whom she has something to say. After making a statement, launching an accusation, or asking a question, the patient then responds to herself by taking on the absent interlocutor’s perspective.

In the case of unresolved parental issues, the dialog could have the scripted format of the patient saying something to her “mother,” and then having her “mother” respond to what she said, going back and forth in a dialog until something that seems meaningful happens. The prop of an actual chair isn’t always necessary, and the context of the conversations can vary. In a bereavement context, for example, a widow might ask the chair-as-deceased-spouse for advice about what to do in a troubling situation.

The article is here.

Saturday, April 7, 2018

The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?

Sven Nyholm and Jilles Smids
Ethical Theory and Moral Practice
November 2016, Volume 19, Issue 5, pp 1275–1289

Abstract

Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident-scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: (i) the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; (ii) moral and legal responsibility; and (iii) decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.

The article is here.

Friday, April 6, 2018

Complaint: Allina ignored intern’s sexual harassment allegations

Barbara L. Jones
Minnesota Lawyer
Originally published March 7, 2018

Here is an excerpt:

Abel’s complaint stems from the practicum at Abbott, partially under Gottlieb’s supervision. She began her practicum in September 2015. According to the complaint, she immediately encountered sexualized conversation with Gottlieb, and he attempted to control any conversations she and other students had with anybody other than him.

On her first day at the clinic, Gottlieb took students outside and instructed Abel to lie down in the street, ostensibly to measure a parking space. She refused and Gottlieb told her that “obeying” him would be an area for growth. When speaking with other people, he frequently referred to Abel, of Asian-Indian descent, as “the graduate student of color” or “the brown one.”  He also refused to provide her with access to the IT chart system, forcing her to ask him for “favors,” the complaint alleges. Gottlieb repeatedly threatened to fire Abel and other students from the practicum, the complaint said.

Gottlieb spent time in individual supervision sessions with Abel and also group sessions that involved role play. He told students to mimic having sex with him in his role as therapist and tell him he was good in bed, the complaint states. At these times he sometimes had a visible erection, the complaint also says. Abel raised these and other concerns but was brushed off by Abbott personnel, her complaint alleges.  Abel asked Dr. Michael Schmitz, the clinical director of hospital-based psychology services, for help but was told that she had to be “emotionally tough” and put up with Gottlieb, the complaint continues. She sought some assistance from Finch, whose job was to assist Gottlieb in the clinical psychology training program and supervise interns.  Gottlieb was displeased and threatening about her discussions with Schmitz and Finch, the complaint says.

The article is here.

Schools are a place for students to grow morally and emotionally — let's encourage them

William Eidtson
The Hill
Originally posted March 10, 2018

Here is an excerpt:

However, if schools were truly a place for students to grow “emotionally and morally,” wouldn’t engaging in a demonstration of solidarity to protest the all too recurrent slaughter of concertgoers, church assemblies, and schoolchildren be one of the most emotionally engaging and morally relevant activities they could undertake?

And if life is all about choices and consequences, wouldn’t the choice to allow students to engage in one of the most cherished traditions of our democracy — namely, political dissent — potentially result in a profound and historically significant educational experience?

The fact is that our educational institutions are often not places that foster emotional and moral growth within students. Why? Part of the reason is that while our schools are pretty good at teaching students how to do things, they fail at teaching why things matter.

School officials tend to assume that if you simply teach students how things work, the “why it’s important” will naturally follow. But this is precisely the opposite of how we learn and grow in the world. People need reasons, stories, and context to direct their skills.

We need the why to give us a context to understand and use the how. We need the why to give us good reasons to learn the how. The why makes the how relevant. The why makes the how endurable. The why makes the how possible.

The article is here.