Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, May 9, 2017

Ethics experts question Kushner relatives pushing White House connections in China

Allan Smith
Business Insider
Originally published May 8, 2017

Ethics experts criticized White House senior adviser Jared Kushner's relatives for using White House connections to enhance a presentation to Chinese investors last weekend.

Members of Kushner's family gave multiple presentations in China detailing an opportunity to "invest $500,000 and immigrate to the United States" through a controversial visa program and promoting ties to Kushner and President Donald Trump, according to media reports.

Richard Painter, who was President George W. Bush's top ethics lawyer from 2005 to 2007 and is now a professor at the University of Minnesota, told Business Insider the presentation was "obviously completely inappropriate."

He added that the Kushner family "ought to be disqualified" from the EB-5 visa program they were promoting. The visa is awarded to foreign investors who invest at least $500,000 in US projects that create at least 10 full-time jobs.

The article is here.

Inside Libratus, the Poker AI That Out-Bluffed the Best Humans

Cade Metz
Wired Magazine
Originally published February 1, 2017

Here is an excerpt:

Libratus relied on three different systems that worked together, a reminder that modern AI is driven not by one technology but many. Deep neural networks get most of the attention these days, and for good reason: They power everything from image recognition to translation to search at some of the world’s biggest tech companies. But the success of neural nets has also pumped new life into so many other AI techniques that help machines mimic and even surpass human talents.

Libratus, for one, did not use neural networks. Mainly, it relied on a form of AI known as reinforcement learning, a method of extreme trial-and-error. In essence, it played game after game against itself. Google’s DeepMind lab used reinforcement learning in building AlphaGo, the system that cracked the ancient game of Go ten years ahead of schedule, but there’s a key difference between the two systems. AlphaGo learned the game by analyzing 30 million Go moves from human players, before refining its skills by playing against itself. By contrast, Libratus learned from scratch.

Through an algorithm called counterfactual regret minimization, it began by playing at random, and eventually, after several months of training and trillions of hands of poker, it too reached a level where it could not just challenge the best humans but play in ways they couldn’t—playing a much wider range of bets and randomizing these bets, so that rivals have more trouble guessing what cards it holds. “We give the AI a description of the game. We don’t tell it how to play,” says Noam Brown, a CMU grad student who built the system alongside his professor, Tuomas Sandholm. “It develops a strategy completely independently from human play, and it can be very different from the way humans play the game.”
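To make the idea concrete, below is a toy illustration of regret matching, the update rule at the heart of counterfactual regret minimization. This is a hedged sketch, not Libratus's actual code (which tackled the vastly larger game of no-limit hold'em): it learns a rock-paper-scissors strategy purely through self-play, starting from random play just as Brown describes.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """Payoff for playing action a against action b: +1 win, -1 loss, 0 tie."""
    return 0 if a == b else (1 if (a - b) % 3 == 1 else -1)

def strategy_from(regrets):
    """Regret matching: mix actions in proportion to positive accumulated regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iterations=100_000):
    regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]    # one regret table per player
    strat_sum = [[0.0] * ACTIONS, [0.0] * ACTIONS]  # running sum of strategies
    for _ in range(iterations):
        strats = [strategy_from(r) for r in regrets]
        moves = [random.choices(range(ACTIONS), weights=s)[0] for s in strats]
        for p in (0, 1):
            opp = moves[1 - p]
            for a in range(ACTIONS):
                # Regret = how much better action a would have done than the
                # action actually played this round.
                regrets[p][a] += payoff(a, opp) - payoff(moves[p], opp)
                strat_sum[p][a] += strats[p][a]
    # The *average* strategy over all iterations is what converges to equilibrium.
    return [[s / sum(ss) for s in ss] for ss in strat_sum]

if __name__ == "__main__":
    print(train())  # both players approach the equilibrium mix [1/3, 1/3, 1/3]
```

In rock-paper-scissors the learned average strategy approaches one-third of each action; in poker the same principle, scaled up with game abstraction and months of supercomputer time, produces the wide, randomized betting the article describes.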

The article is here.

Monday, May 8, 2017

Improving Ethical Culture by Measuring Stakeholder Trust

Phillip Nichols and Patricia Dowden
Compliance and Ethics Blog
Originally posted April 10, 2017

Here is an excerpt:

People who study how individuals behave in organizations find that norms are far more powerful than formal rules, even formal rules that are backed up by legal sanctions. Thus, a norm that guides people not to steal is going to be more effective than a formal rule that prohibits stealing. Therein lies the benefit to a business firm. A strong ethical culture will be far more effective than formal rules (although of course there is still a need for formal rules).

When the “ethical culture” component of a business firm’s overall culture is strong – when norms and other things guide people in that firm to make sound ethical and social decisions – the firm benefits in two ways: it enhances the positive and controls the negative. In terms of enhancing the positive, a strong ethical culture increases the amount of loyalty and commitment that people associated with a business firm have towards that firm. A strong ethical culture also contributes to higher levels of job satisfaction. People who are loyal and committed to a business firm are more likely to make “sacrifices” for that firm, meaning they are more likely to do things like working late or on weekends in order to get a project done, or help another department when that department needs extra help. People who are loyal and committed to a firm are more likely to defend that firm against accusers, and to stand by the firm in times of crisis. Workers who have high levels of job satisfaction are more likely to stay with a firm, and are more likely to refer customers to that firm and to recruit others to work for that firm.

The blog post is here.

Raising good robots

Regina Rini
aeon.com
Originally published April 18, 2017

Intelligent machines, long promised and never delivered, are finally on the horizon. Sufficiently intelligent robots will be able to operate autonomously from human control. They will be able to make genuine choices. And if a robot can make choices, there is a real question about whether it will make moral choices. But what is moral for a robot? Is this the same as what’s moral for a human?

Philosophers and computer scientists alike tend to focus on the difficulty of implementing subtle human morality in literal-minded machines. But there’s another problem, one that really ought to come first. It’s the question of whether we ought to try to impose our own morality on intelligent machines at all. In fact, I’d argue that doing so is likely to be counterproductive, and even unethical. The real problem of robot morality is not the robots, but us. Can we handle sharing the world with a new type of moral creature?

We like to imagine that artificial intelligence (AI) will be similar to humans, because we are the only advanced intelligence we know. But we are probably wrong. If and when AI appears, it will probably be quite unlike us. It might not reason the way we do, and we could have difficulty understanding its choices.

The article is here.

Sunday, May 7, 2017

Individual Differences in Moral Disgust Do Not Predict Utilitarian Judgments, Sexual and Pathogen Disgust Do

Michael Laakasuo, Jukka Sundvall & Marianna Drosinou
Scientific Reports 7, Article number: 45526 (2017)
doi:10.1038/srep45526

Abstract

The role of emotional disgust and disgust sensitivity in moral judgment and decision-making has been debated intensively for over 20 years. Until very recently, there were two main evolutionary narratives for this rather puzzling association. One of the models suggests that it developed through some form of group selection mechanism, where the internal norms of the groups were acting as pathogen safety mechanisms. Another model suggested that these mechanisms developed through hygiene norms, which were piggybacking on pathogen disgust mechanisms. In this study we present another alternative, namely that this mechanism might have evolved through sexual disgust sensitivity. We note that though the role of disgust in moral judgment has been questioned recently, few studies have taken disgust sensitivity into account. We present data from a large sample (N = 1300) in which we analyzed the associations between the Three Domain Disgust Scale and the 12 most commonly used moral dilemmas measuring utilitarian/deontological preferences with structural equation modeling. Our results indicate that of the three domains of disgust, only sexual disgust is associated with more deontological moral preferences. We also found that pathogen disgust was associated with more utilitarian preferences. Implications of the findings are discussed.
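As a rough illustration of the kind of structural equation model the abstract describes, the sketch below uses the open-source semopy package in Python: latent disgust factors measured by scale items predict a latent utilitarian-preference factor. Every variable name here (the sd/pd/md items, the dilemma indicators, responses.csv) is invented for illustration; the authors' actual model specification may well differ.

```python
import pandas as pd
from semopy import Model

# Hypothetical measurement model: three latent disgust domains, each
# loading on a few scale items, plus a latent utilitarian-preference
# factor built from dilemma responses. "=~" defines latent factors,
# "~" defines the structural regression.
desc = """
sexual_disgust   =~ sd1 + sd2 + sd3
pathogen_disgust =~ pd1 + pd2 + pd3
moral_disgust    =~ md1 + md2 + md3
utilitarianism   =~ dilemma1 + dilemma2 + dilemma3
utilitarianism ~ sexual_disgust + pathogen_disgust + moral_disgust
"""

data = pd.read_csv("responses.csv")  # one row per participant (hypothetical file)
model = Model(desc)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```

On the paper's account, such a model would show a negative path from sexual_disgust to utilitarianism (more deontological preferences) and a positive path from pathogen_disgust.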

The article is here.

Saturday, May 6, 2017

Investigating Altruism and Selfishness Through the Hypothetical Use of Superpowers

Ahuti Das-Friebel, Nikita Wadhwa, Merin Sanil, Hansika Kapoor, Sharanya V.
Journal of Humanistic Psychology 
First published April 13, 2017
doi:10.1177/0022167817699049

Abstract

Drawing from literature associating superheroes with altruism, this study examined whether ordinary individuals engaged in altruistic or selfish behavior when they were hypothetically given superpowers. Participants were presented with six superpowers—three positive (healing, invulnerability, and flight) and three negative (fear inducement, psychic persuasion, and poison generation). They indicated the desirability of each power, what they would use it for (social benefit, personal gain, social harm), and listed examples of such uses. Quantitative analyses (n = 285) revealed that 94% of participants wished to possess a superpower, and a majority indicated using powers for their own benefit rather than for altruistic purposes. Furthermore, while men wanted positive and negative powers more, women were more likely than men to use such powers for personal and social gain. Qualitative analyses of the uses of the powers (n = 524) resulted in 16 themes of altruistic and selfish behavior. Results were analyzed within Pearce and Amato’s model of helping behavior, which was used to classify altruistic behavior, and adapted to classify selfish behavior. In contrast to how superheroes behave, both sets of analyses revealed that participants would hypothetically use superpowers for selfish rather than altruistic purposes. Limitations and suggestions for future research are outlined.

The article is here.

Friday, May 5, 2017

When Therapists Make Mistakes

Keely Kolmes
drkolmes.com
Originally published August 10, 2009

We don’t often talk about therapeutic blunders, although they happen all the time. There are so many ways for therapists to fail clients. Probably the most common is a mismatch of styles, or a therapist who is not really helping her client. Then there are those moments when perhaps we fail our clients by not responding in the moment in the way the client might desire. Maybe we sometimes challenge when we should nurture. Or we nurture when we should challenge. Or we may do any number of subtle things, perhaps below the threshold of consciousness, not even fully acknowledged by our clients, which create distance, disappointment, or detachment. Some examples are stifling a yawn, spacing out for a moment, or failing to remember an important name or detail, so that the client feels we are not fully present or engaged with them. This lack of connection may trigger feelings of disappointment, loss, or abandonment. For clients with relational traumas, events such as vacations, emergencies, or even adjustments in session times may also cause feelings of loss and abandonment.

Recently, I was having one of those weeks. The details aren’t important, but I’ll acknowledge that I had taken on a few too many things. Top it off with having a few people needing to meet at different times. Add to that one way I manage client confidentiality: putting client names into my hard calendar (which I do not carry about with me) and then transcribing the sessions later to my iPhone calendar simply as “client,” to preserve confidentiality in the event that my phone is lost or stolen.

The result?

The blog post is here.
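The confidentiality scheme Kolmes describes (full names only in a private paper calendar, a generic “client” label in the phone calendar) amounts to an anonymizing sync step. Here is a minimal sketch of that idea; the types and names are illustrative assumptions, not her actual workflow.

```python
from dataclasses import dataclass, replace
from datetime import datetime

@dataclass(frozen=True)
class Appointment:
    start: datetime
    end: datetime
    title: str  # client name in the private (office-only) calendar

def anonymize(appointments):
    """Replace identifying titles with a generic label before syncing
    the schedule to a portable, loss-prone calendar."""
    return [replace(a, title="client") for a in appointments]

# Example: the private record keeps the name; the synced copy does not.
private = [Appointment(datetime(2017, 5, 5, 10), datetime(2017, 5, 5, 11), "Jane Doe")]
portable = anonymize(private)
assert portable[0].title == "client"
```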

The Duty to be Morally Enhanced

Persson, I. & Savulescu, J.
Topoi (2017)
doi:10.1007/s11245-017-9475-7

Abstract

We have a duty to try to develop and apply safe and cost-effective means to increase the probability that we shall do what we morally ought to do. It is here argued that this includes biomedical means of moral enhancement, that is, pharmaceutical, neurological or genetic means of strengthening the central moral drives of altruism and a sense of justice. Such a strengthening of moral motivation is likely to be necessary today because common-sense morality, which has its evolutionary origin in small-scale societies with primitive technology, will become much more demanding if it is revised to serve the needs of contemporary globalized societies with an advanced technology capable of affecting conditions of life worldwide for centuries to come.

The article is here.

Thursday, May 4, 2017

Rude Doctors, Rude Nurses, Rude Patients

Perri Klass
The New York Times
Originally published April 10, 2017

Here is an excerpt:

None of that is a surprise, and in fact, there is a good deal of literature to suggest that the medical environment includes all kinds of harshness, and that much of the rudeness you encounter as a doctor or nurse is likely to come from colleagues and co-workers. An often-cited British study from 2015 called “Sticks and Stones” reported that rude, dismissive and aggressive communication between doctors (inevitably abbreviated, in a medical journal, as RDA communication) affected 31 percent of doctors several times a week or more. The researchers found that rudeness was more common from certain medical specialties: radiology, general surgery, neurosurgery and cardiology. They also established that higher status was somewhat protective; junior doctors and trainees encountered more rudeness.

In the United States, a number of studies have looked at how rudeness affects medical students and medical residents, as part of tracking the different ways in which they are often mistreated.

One article earlier this year in the journal Medical Teacher charted the effect on medical student morale of a variety of experiences, including verbal and nonverbal mistreatment, by everyone from attending physicians to residents to nurses. Mistreatment of medical students, the authors argued, may actually reflect serious problems on the part of their teachers, such as burnout, depression or substance abuse; it’s not enough to classify the “perpetrators” (that is, the rude people) as unprofessional and tell them to stop.

The article is here.