Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Thursday, August 31, 2017

Stress Leads to Bad Decisions. Here’s How to Avoid Them

Ron Carucci
Harvard Business Review
Originally posted August 29, 2017

Here is an excerpt:

Facing high-risk decisions. 

For routine decisions, most leaders fall into one of two camps: The “trust your gut” leader makes highly intuitive decisions, and the “analyze everything” leader wants lots of data to back up their choice. Usually, a leader’s preference for one of these approaches poses minimal threat to the decision’s quality. But the stress caused by a high-stakes decision can provoke them to the extremes of their natural inclination. The highly intuitive leader becomes impulsive, missing critical facts. The highly analytical leader gets paralyzed in data, often failing to make any decision. The right blend of data and intuition applied to carefully constructing a choice builds the organization’s confidence for executing the decision once made. Clearly identify the risks inherent in the precedents underlying the decision and communicate that you understand them. Examine available data sets, identify any conflicting facts, and vet them with appropriate stakeholders (especially superiors) to make sure your interpretations align. Ask for input from others who’ve faced similar decisions. Then make the call.

Solving an intractable problem. 

To a stressed-out leader facing a chronic challenge, it often feels like their only options are to either (1) vehemently argue for their proposed solution with unyielding certainty, or (2) offer ideas very indirectly to avoid seeming domineering and to encourage the team to take ownership of the challenge. The problem, again, is that neither extreme works. If people feel the leader is being dogmatic, they will disengage regardless of the merits of the idea. If they feel the leader lacks confidence in the idea, they will struggle to muster conviction to try it, concluding, “Well, if the boss isn’t all that convinced it will work, I’m not going to stick my neck out.”

The article is here.

Wednesday, August 30, 2017

Vignette 36: The Cancellation Conundrum

Dr. Wendy Malik operates an independent practice in a suburban area.  She receives a referral from a physician, with whom she has a positive working relationship.  Dr. Malik contacts the patient, completes a phone screening, and sets up an appointment with Mr. Larry David.

As is her practice, Dr. Malik sends a confirmation email, attaching her version of the informed consent document.  She instructs Mr. David that he does not have to print it out, only review it; they will discuss any questions at the initial appointment.

Several days later, Dr. Malik checks her email and finds a message from Mr. David with an attachment.  Mr. David asks Dr. Malik to review his edits on the informed consent document.

Along with some suggested corrections on the document, Mr. David has modified the cancellation policy.  Dr. Malik’s form (and standard policy) states that appointments cancelled with less than 24 hours’ notice will be charged to the patient.  Mr. David added a sentence stating that if Dr. Malik cancels an appointment with less than 24 hours’ notice, he expects her to pay him an amount equal to her hourly rate.

Flustered by this edit, Dr. Malik contacts you for a consultation.

What are the ethical issues involved in this case?

What are the pertinent clinical issues in this case?

How would you help Dr. Malik work through these issues?

Would you recommend Dr. Malik call to address the issue ahead of the appointment or wait for the initial session?

At this point, must Dr. Malik keep Mr. David as a patient?

If not, does Dr. Malik need to contact her referral source about the issue?

Fat Shaming in the Doctor's Office Can Be Mentally and Physically Harmful

American Psychological Association
Press Release from August 3, 2017

Medical discrimination based on people’s size and negative stereotypes of overweight people can take a toll on people’s physical health and well-being, according to a review of recent research presented at the 125th Annual Convention of the American Psychological Association.

“Disrespectful treatment and medical fat shaming, in an attempt to motivate people to change their behavior, is stressful and can cause patients to delay health care seeking or avoid interacting with providers,” presenter Joan Chrisler, PhD, a professor of psychology at Connecticut College, said during a symposium titled “Weapons of Mass Distraction — Confronting Sizeism.”

Sizeism can also have an effect on how doctors medically treat patients, as overweight people are often excluded from medical research based on assumptions about their health status, Chrisler said, meaning the standard dosage for drugs may not be appropriate for larger body sizes. Recent studies have shown frequent under-dosing of overweight patients who were prescribed antibiotics and chemotherapy, she added.

“Recommending different treatments for patients with the same condition based on their weight is unethical and a form of malpractice,” Chrisler said. “Research has shown that doctors repeatedly advise weight loss for fat patients while recommending CAT scans, blood work or physical therapy for other, average weight patients.”

In some cases, providers might not take fat patients’ complaints seriously or might assume that their weight is the cause of any symptoms they experience, Chrisler added. “Thus, they could jump to conclusions or fail to run appropriate tests, which results in misdiagnosis,” she said.

The press release is here.

Tuesday, August 29, 2017

Must science be testable?

Massimo Pigliucci
Aeon
Originally published August 10, 2016

Here is an excerpt:

That said, the publicly visible portion of the physics community nowadays seems split between people who are openly dismissive of philosophy and those who think they got the pertinent philosophy right but their ideological opponents haven’t. At stake isn’t just the usually tiny academic pie, but public appreciation of and respect for both the humanities and the sciences, not to mention millions of dollars in research grants (for the physicists, not the philosophers). Time, therefore, to take a more serious look at the meaning of Popper’s philosophy and why it is still very much relevant to science, when properly understood.

As we have seen, Popper’s message is deceptively simple, and – when repackaged in a tweet – has in fact deceived many a smart commentator into underestimating the sophistication of the underlying philosophy. If one were to turn that philosophy into a bumper sticker slogan it would read something like: ‘If it ain’t falsifiable, it ain’t science, stop wasting your time and money.’

But good philosophy doesn’t lend itself to bumper sticker summaries, so one cannot stop there and pretend that there is nothing more to say. Popper himself changed his mind throughout his career about a number of issues related to falsification and demarcation, as any thoughtful thinker would do when exposed to criticisms and counterexamples from his colleagues. For instance, he initially rejected any role for verification in establishing scientific theories, thinking that it was far too easy to ‘verify’ a notion if one were actively looking for confirmatory evidence. Sure enough, modern psychologists have a name for this tendency, common to laypeople as well as scientists: confirmation bias.

Nonetheless, later on Popper conceded that verification – especially of very daring and novel predictions – is part of a sound scientific approach. After all, the reason Einstein became a scientific celebrity overnight after the 1919 total eclipse is precisely because astronomers had verified the predictions of his theory all over the planet and found them in satisfactory agreement with the empirical data.

The article is here.

The Influence of (Dis)belief in Free Will on Immoral Behavior

Caspar, E. A., Vuillaume, L., Magalhães De Saldanha da Gama, P. A. and Cleeremans, A.
Frontiers in Psychology, 17 January 2017

Abstract

One of the hallmarks of human existence is that we all hold beliefs that determine how we act. Amongst such beliefs, the idea that we are endowed with free will appears to be linked to prosocial behaviors, probably by enhancing the feeling of responsibility of individuals over their own actions. However, such effects appear to be more complex than one might have initially thought. Here, we aimed at exploring how induced disbeliefs in free will impact the sense of agency over the consequences of one’s own actions in a paradigm that engages morality. To do so, we asked participants to choose to inflict or to refrain from inflicting an electric shock to another participant in exchange for a small financial benefit. Our results show that participants who were primed with a text defending neural determinism – the idea that humans are a mere bunch of neurons guided by their biology – administered fewer shocks and were less vindictive toward the other participant. Importantly, this finding only held for female participants. These results show the complex interaction between gender, (dis)beliefs in free will and moral behavior.

From the Conclusion:

To conclude, we observed that disbelief in free will had a positive impact on the morality of decisions toward others. The present work extends previous research by showing that additional factors, such as gender, could influence the impact of (dis)belief in free will on prosocial and antisocial behaviors. Our results also showed that previous results relative to the (moral) context underlying the paradigm in use are not always replicated.

The research is here.

Monday, August 28, 2017

Maintaining cooperation in complex social dilemmas using deep reinforcement learning

Adam Lerer and Alexander Peysakhovich
(2017)

Abstract

In social dilemmas individuals face a temptation to increase their payoffs in the short run at a cost to the long run total welfare. Much is known about how cooperation can be stabilized in the simplest of such settings: repeated Prisoner’s Dilemma games. However, there is relatively little work on generalizing these insights to more complex situations. We start to fill this gap by showing how to use modern reinforcement learning methods to generalize a highly successful Prisoner’s Dilemma strategy: tit-for-tat. We construct artificial agents that act in ways that are simple to understand, nice (begin by cooperating), provokable (try to avoid being exploited), and forgiving (following a bad turn try to return to mutual cooperation). We show both theoretically and experimentally that generalized tit-for-tat agents can maintain cooperation in more complex environments. In contrast, we show that employing purely reactive training techniques can lead to agents whose behavior results in socially inefficient outcomes.
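
A minimal sketch of the classic tit-for-tat logic that the paper generalizes may help make the three properties concrete. This is the textbook repeated Prisoner's Dilemma strategy, not the authors' deep-RL agent; the class name and payoff values are illustrative assumptions only.

```python
# Sketch of classic tit-for-tat in a repeated Prisoner's Dilemma.
# Illustrative only; payoff values and names are assumptions, not the paper's.

COOPERATE, DEFECT = "C", "D"

# Hypothetical payoff matrix: (row player's payoff, column player's payoff)
PAYOFFS = {
    (COOPERATE, COOPERATE): (3, 3),
    (COOPERATE, DEFECT): (0, 5),
    (DEFECT, COOPERATE): (5, 0),
    (DEFECT, DEFECT): (1, 1),
}

class TitForTat:
    """Nice (starts by cooperating), provokable (retaliates after a defection),
    and forgiving (returns to cooperation once the partner cooperates again)."""

    def __init__(self):
        self.partner_last_move = COOPERATE  # "nice": assume cooperation at the start

    def act(self):
        return self.partner_last_move  # mirror the partner's previous move

    def observe(self, partner_move):
        self.partner_last_move = partner_move

def play(agent_a, agent_b, rounds=10):
    history = []
    for _ in range(rounds):
        move_a, move_b = agent_a.act(), agent_b.act()
        agent_a.observe(move_b)
        agent_b.observe(move_a)
        history.append(PAYOFFS[(move_a, move_b)])
    return history

if __name__ == "__main__":
    # Two tit-for-tat agents sustain mutual cooperation: (3, 3) every round.
    print(play(TitForTat(), TitForTat()))
```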

The paper is here.

Death Before Dishonor: Incurring Costs to Protect Moral Reputation

Andrew J. Vonasch, Tania Reynolds, Bo M. Winegard, Roy F. Baumeister
Social Psychological and Personality Science 
First published: July 21, 2017

Abstract

Predicated on the notion that people’s survival depends greatly on participation in cooperative society, and that reputation damage may preclude such participation, four studies with diverse methods tested the hypothesis that people would make substantial sacrifices to protect their reputations. A “big data” study found that maintaining a moral reputation is one of people’s most important values. In making hypothetical choices, high percentages of “normal” people reported preferring jail time, amputation of limbs, and death to various forms of reputation damage (i.e., becoming known as a criminal, Nazi, or child molester). Two lab studies found that 30% of people fully submerged their hands in a pile of disgusting live worms, and 63% endured physical pain to prevent dissemination of information suggesting that they were racist. We discuss the implications of reputation protection for theories about altruism and motivation.

The article is here.

Sometimes giving a person a choice is an act of terrible cruelty

Lisa Tessman
aeon.com
Originally posted August 9, 2017

It is not always good to have the opportunity to make a choice. When we must decide to take one action rather than another, we also, ordinarily, become at least partly responsible for what we choose to do. Usually this is appropriate; it’s what makes us the kinds of creatures who can be expected to abide by moral norms. 

Sometimes, making a choice works well. For instance, imagine that while leaving the supermarket parking lot you accidentally back into another car, visibly denting it. No one else is around, nor do you think there are any surveillance cameras. You face a choice: you could drive away, fairly confident that no one will ever find out that you damaged someone’s property, or you could leave a note on the dented car’s windshield, explaining what happened and giving contact information, so that you can compensate the car’s owner.

Obviously, the right thing to do is to leave a note. If you don’t do this, you’ve committed a wrongdoing that you could have avoided just by making a different choice. Even though you might not like having to take responsibility – and paying up – it’s good to be in the position of being able to do the right thing.

Yet sometimes, having a choice means deciding to commit one bad act or another. Imagine being a doctor or nurse caught in the following fictionalised version of real events at a hospital in New Orleans in the aftermath of Hurricane Katrina in 2005. Due to a tremendous level of flooding after the hurricane, the hospital must be evacuated. The medical staff have been ordered to get everyone out by the end of the day, but not all patients can be removed. As time runs out, it becomes clear that you have a choice, but it’s a choice between two horrifying options: euthanise the remaining patients without consent (because many of them are in a condition that renders them unable to give it) or abandon them to suffer a slow, painful and terrifying death alone. Even if you’re anguished at the thought of making either choice, you might be confident that one action – let’s say administering a lethal dose of drugs – is better than the other. Nevertheless, you might have the sense that no matter which action you perform, you’ll be violating a moral requirement.

Sunday, August 27, 2017

Will Trump Be the Death of the Goldwater Rule?

Jeannie Suk Gersen
The New Yorker
Originally posted August 23, 2017

Here is an excerpt:

The class of professionals best equipped to answer these questions has largely abstained from speaking publicly about the President’s mental health. The principle known as the “Goldwater rule” prohibits psychiatrists from giving professional opinions about public figures without personally conducting an examination, as Jane Mayer wrote in this magazine in May. After losing the 1964 Presidential election, Senator Barry Goldwater successfully sued Fact magazine for defamation after it published a special issue in which psychiatrists declared him “severely paranoid” and “unfit” for the Presidency. For a public figure to prevail in a defamation suit, he must demonstrate that the defendant acted with “actual malice”; a key piece of evidence in the Goldwater case was Fact’s disregard of a letter from the American Psychiatric Association warning that any survey of psychiatrists who hadn’t clinically examined Goldwater was invalid.

The Supreme Court denied Fact’s cert petition, which hoped to vindicate First Amendment rights to free speech and a free press. But Justice Hugo Black, joined by William O. Douglas, dissented, writing, “The public has an unqualified right to have the character and fitness of anyone who aspires to the Presidency held up for the closest scrutiny. Extravagant, reckless statements and even claims which may not be true seem to me an inevitable and perhaps essential part of the process by which the voting public informs itself of the qualities of a man who would be President.”

These statements, of course, resonate today. President Trump has unsuccessfully pursued many defamation lawsuits over the years, leading him to vow during the 2016 campaign to “open up our libel laws so when they write purposely negative and horrible and false articles, we can sue them and win lots of money.” (One of his most recent suits, dismissed in 2016, concerned a Univision executive’s social-media posting of side-by-side photos of Trump and Dylann Roof, the white supremacist who murdered nine black churchgoers in Charleston, South Carolina, in 2015; Trump alleged that the posting falsely accused him of inciting similar acts.)

The article is here.

Super-intelligence and eternal life

Transhumanism’s faithful follow it blindly into a future for the elite

Alexander Thomas
The Conversation
First published July 31, 2017

The rapid development of so-called NBIC technologies – nanotechnology, biotechnology, information technology and cognitive science – is giving rise to possibilities that have long been the domain of science fiction. Disease, ageing and even death are all human realities that these technologies seek to end.

They may enable us to enjoy greater “morphological freedom” – we could take on new forms through prosthetics or genetic engineering. Or advance our cognitive capacities. We could use brain-computer interfaces to link us to advanced artificial intelligence (AI).

Nanobots could roam our bloodstream to monitor our health and enhance our emotional propensities for joy, love or other emotions. Advances in one area often raise new possibilities in others, and this “convergence” may bring about radical changes to our world in the near-future.

“Transhumanism” is the idea that humans should transcend their current natural state and limitations through the use of technology – that we should embrace self-directed human evolution. If the history of technological progress can be seen as humankind’s attempt to tame nature to better serve its needs, transhumanism is the logical continuation: the revision of humankind’s nature to better serve its fantasies.

The article is here.

Saturday, August 26, 2017

Liars, Damned Liars, and Zealots: The Effect of Moral Mandates on Transgressive Advocacy Acceptance

Allison B. Mueller, Linda J. Skitka
Social Psychological and Personality Science 
First published: July 25, 2017

Abstract

This research explored people’s reactions to targets who “went too far” to support noble causes. We hypothesized that observers’ moral mandates would shape their perceptions of others’ advocacy, even when that advocacy was transgressive, that is, when it used norm-violating means (i.e., lying) to achieve a preferred end. Observers were expected to accept others’ advocacy, independent of its credibility, to a greater extent when it bolstered their strong (vs. weak) moral mandate. Conversely, observers with strong (vs. weak) moral conviction for the cause were expected to condemn others’ advocacy—independent of its credibility—to a greater degree when it represented progress for moral opponents. Results supported these predictions. When evaluating a target in a persuasive communication setting, people’s judgments were uniquely shaped by the degree to which the target bolstered or undermined a cherished moral mandate.

Here is part of the Discussion Section:

These findings expand our knowledge of the moral mandate effect in two key ways. First, this work suggests that the moral mandate effect extends to specific individuals, not just institutions and authorities. Moral mandates may shape people’s perceptions of any target who engages in norm-violating behaviors that uphold moralized causes: co-workers, politicians, or CEOs. Second, this research suggests that, although people are not comfortable excusing others for heinous crimes that serve a moralized end (Mullen & Skitka, 2006), they appear comparatively tolerant of norm violations like lying.

A troubling and timely implication of these findings is that political figures may be able to act in corrupt ways without damaging their images (at least in the eyes of their supporters).

The article is here.

Friday, August 25, 2017

A philosopher who studies life changes says our biggest decisions can never be rational

Olivia Goldhill
Quartz.com
Originally published August 13, 2017

At some point, everyone reaches a crossroads in life: Do you decide to take that job and move to a new country, or stay put? Should you become a parent, or continue your life unencumbered by the needs of children?

Instinctively, we try to make these decisions by projecting ourselves into the future, trying to imagine which choice will make us happier. Perhaps we seek counsel or weigh up evidence. We might write out a pro/con list. What we are doing, ultimately, is trying to figure out whether or not we will be better off working for a new boss and living in Morocco, say, or raising three beautiful children.

This is fundamentally impossible, though, says philosopher L.A. Paul at the University of North Carolina at Chapel Hill, a pioneer in the philosophical study of transformative experiences. Certain life choices are so significant that they change who we are. Before undertaking those choices, we are unable to evaluate them from the perspective and values of our future, changed selves. In other words, your present self cannot know whether your future self will enjoy being a parent or not.

The article is here.

What are the ethical consequences of immortality technology?

Francesca Minerva and Adrian Rorheim
aeon.com
First published August 8, 2017

Immortality has gone secular. Unhooked from the realm of gods and angels, it’s now the subject of serious investment – both intellectual and financial – by philosophers, scientists and the Silicon Valley set. Several hundred people have already chosen to be ‘cryopreserved’ in preference to simply dying, as they wait for science to catch up and give them a second shot at life. But if we treat death as a problem, what are the ethical implications of the highly speculative ‘solutions’ being mooted?

Of course, we don’t currently have the means of achieving human immortality, nor is it clear that we ever will. But two hypothetical options have so far attracted the most interest and attention: rejuvenation technology, and mind uploading.

Like a futuristic fountain of youth, rejuvenation promises to remove and reverse the damage of ageing at the cellular level. Gerontologists such as Aubrey de Grey argue that growing old is a disease that we can circumvent by having our cells replaced or repaired at regular intervals. Practically speaking, this might mean that every few years, you would visit a rejuvenation clinic. Doctors would not only remove infected, cancerous or otherwise unhealthy cells, but also induce healthy ones to regenerate more effectively and remove accumulated waste products. This deep makeover would ‘turn back the clock’ on your body, leaving you physiologically younger than your actual age. You would, however, remain just as vulnerable to death from acute trauma – that is, from injury and poisoning, whether accidental or not – as you were before.

Thursday, August 24, 2017

China's Plan for World Domination in AI Isn't So Crazy After All

Mark Bergen and David Ramli
Bloomberg.com
First published August 14, 2017

Here is an excerpt:

Xu runs SenseTime Group Ltd., which makes artificial intelligence software that recognizes objects and faces, and counts China’s biggest smartphone brands as customers. In July, SenseTime raised $410 million, a sum it said was the largest single round for an AI company to date. That feat may soon be topped, probably by another startup in China.

The nation is betting heavily on AI. Money is pouring in from China’s investors, big internet companies and its government, driven by a belief that the technology can remake entire sectors of the economy, as well as national security. A similar effort is underway in the U.S., but in this new global arms race, China has three advantages: A vast pool of engineers to write the software, a massive base of 751 million internet users to test it on, and most importantly staunch government support that includes handing over gobs of citizens’ data – something that makes Western officials squirm.

Data is key because that’s how AI engineers train and test algorithms to adapt and learn new skills without human programmers intervening. SenseTime built its video analysis software using footage from the police force in Guangzhou, a southern city of 14 million. Most Chinese mega-cities have set up institutes for AI that include some data-sharing arrangements, according to Xu. "In China, the population is huge, so it’s much easier to collect the data for whatever use-scenarios you need," he said. "When we talk about data resources, really the largest data source is the government."

The article is here.

Brain Augmentation: How Scientists are Working to Create Cyborg Humans with Super Intelligence

Hannah Osborne
Newsweek
Originally published June 14, 2017

For most people, the idea of brain augmentation remains in the realms of science fiction. However, for scientists across the globe, it is fast becoming reality—with the possibility of humans with “super-intelligence” edging ever closer.

In laboratory experiments on rats, researchers have already been able to transfer memories from one brain to another. Future projects include the development of telepathic communication and the creation of “cyborgs,” where humans have advanced abilities thanks to technological interventions.

Scientists Mikhail Lebedev, Ioan Opris and Manuel Casanova have now published a comprehensive collection of research into brain augmentation, and their efforts have won a major European science research prize—the Frontiers Spotlight Award. This $100,000 prize is for the winners to set up a conference that highlights emerging research in their field.

Project leader Lebedev, a senior researcher at Duke University, North Carolina, said the reality of brain augmentation—where intelligence is enhanced by brain implants—will be part of everyday life by 2030, and that “people will have to deal with the reality of this new paradigm.”

Their collection, Augmentation of brain function: facts, fiction and controversy, was published by Frontiers and includes almost 150 research articles by more than 600 contributing authors. It focuses on current brain augmentation, future proposals and the ethical and legal implications the topic raises.

The article is here.

Wednesday, August 23, 2017

Procedural ruling sets higher bar for expert-witness testimony

Brendan Murphy
AMA Wire
Originally posted August 9, 2017

In a procedural decision that could keep so-called junk science out of the courtroom, the District of Columbia Court of Appeals adopted an evidentiary standard that places additional scrutiny on testimony from expert witnesses.

The case at the center of the ruling—Motorola v. Murray—raises the issue of whether cellphones cause brain cancer. In total, 29 cases on the subject matter were brought before the Superior Court for the District of Columbia.

The court did acknowledge isolated strands of scientific data that suggest a possible causal connection between cellphone use and brain cancer. But the court ultimately ruled that based on the research to date, there was inadequate data for any scientist to opine on a causal connection between cellphone use and cancer to any degree of scientific certainty.

In spite of this, the plaintiffs offered their own expert testimony to the contrary, arguing that the jury should determine the validity of the testimony.

The article is here.

Tell it to me straight, doctor: why openness from health experts is vital

Robin Bisson
The Guardian
Originally published August 3, 2017

Here is an excerpt:

It is impossible to overstate the importance of public belief that the medical profession acts in the interests of patients. Any suggestion that public health experts are not being completely open looks at best paternalistic and at worst plays into the hands of those, such as the anti-vaccination lobby, who have warped views about the medical establishment.

So when it comes out that public health messages such as “complete the course” aren’t backed up by evidence, it adds colour to the picture of a paternalistic medical establishment and risks undermining public trust.

Simple public health messages – wear sunscreen, eat five portions of fruit and veg a day – undoubtedly have positive effects on everyone’s health. But people are also capable of understanding nuance and the shifting sands of new evidence. The best way to guarantee people keep trusting experts is for experts to put their trust in people.

The article is here.

Tuesday, August 22, 2017

Jared and Ivanka are failing a basic moral test

Lev Golinkin
CNN.com
Originally published August 20, 2017

Here is an excerpt:

But the silence emanating from Jared and Ivanka was exponentially more powerful than any I'd heard before. To me, as a Jew, seeing nothing but two tweets from Ivanka brought the kind of pain I'm sure is echoed by African-Americans anytime Ben Carson defends the President, and Asian-Americans in the wake of Elaine Chao's and Nikki Haley's equivocations: condemning hate in general terms while carefully avoiding criticizing the very administration they're part of.

No press conference was forthcoming, no rejection of Donald Trump's words; there was no statement from Jared about the horror his grandparents had survived; nothing from Ivanka, who had spoken about standing up for mothers on the campaign trail, about defending today's Jewish children – her children; indeed all children – from intimidation and violence. There was nothing, but the sound of steady clicking on Ivanka's electronic device as she wrote two tweets.

It was like listening to the fabric of Judaism tear at itself.

Beneath Jewish rituals, customs and rules lies a simple and sacred idea: preserving the sanctity of life. It's why the ill are absolved from fasting on days of penance. It's why the no-electricity, no-work rules of Shabbat go out the window the moment a life-threatening emergency hits. In fact, it's considered a grave sin to put someone at risk by blindly keeping the Sabbath, for it places righteousness above humanity. This ethical focus on preserving life is the substrate of Judaism, first, last and always.

The opinion piece is here.

Informed-consent ruling may have “far-reaching, negative impact”

Andis Robeznieks
AMA Wire
Originally published August 8, 2017

Here are two excerpts:

A lawsuit alleging Dr. Toms had not obtained informed consent was initiated by Shinal and her husband on Dec. 17, 2009. The brief notes that Shinal “did not assert that the harm was the result of negligence” and that “there is no contention” that Dr. Toms’ staff provided inaccurate information during the informed consent process.

A jury found for Dr. Toms. Shinal appealed and the Pennsylvania Superior Court affirmed the decision. The case was heard before the Pennsylvania Supreme Court in November 2016. The case was decided June 20.

According to Wecht, a key issue is “whether the trial court misapplied the common law and the MCARE Act when it instructed the jury that it could consider information provided to Mrs. Shinal by Dr. Toms' ‘qualified staff’ in deciding whether Dr. Toms obtained Mrs. Shinal's informed consent to aggressive brain surgery.”

(cut)

PAMED General Counsel Angela Boateng also weighed in.

“It was not uncommon for other qualified staff to assist a physician in providing the requisite information or answering follow-up questions a patient may have had. The Medical Practice Act and other professional regulations permitted this level of assistance,” she commented. “The patient’s ability to follow up with the physician or his qualified staff was usually aimed at promoting a patient’s understanding of the treatment or procedure to be completed. The court’s decision, however, has put an end to this practice.”

The article is here.

Monday, August 21, 2017

Burnout at Work Isn’t Just About Exhaustion. It’s Also About Loneliness

Emma Seppala and Marissa King
Harvard Business Review
First published June 29, 2017

More and more people are feeling tired and lonely at work. In analyzing the General Social Survey of 2016, we found that, compared with roughly 20 years ago, people are twice as likely to report that they are always exhausted. Close to 50% of people say they are often or always exhausted due to work. This is a shockingly high statistic — and it’s a 32% increase from two decades ago. What’s more, there is a significant correlation between feeling lonely and work exhaustion: The more people are exhausted, the lonelier they feel.

This loneliness is not a result of social isolation, as you might think, but rather is due to the emotional exhaustion of workplace burnout. In researching the book The Happiness Track, we found that 50% of people — across professions, from the nonprofit sector to the medical field — are burned out. This isn’t just a problem for busy, overworked executives (though the high rates of loneliness and burnout among this group are well known). Our work suggests that the problem is pervasive across professions and up and down corporate hierarchies.

Loneliness, whether it results from social isolation or exhaustion, has serious consequences for individuals. John Cacioppo, a leading expert on loneliness and coauthor of Loneliness: Human Nature and the Need for Social Connection, emphasizes its tremendous impact on psychological and physical health and longevity. Research by Sarah Pressman, of the University of California, Irvine, corroborates his work and demonstrates that while obesity reduces longevity by 20%, drinking by 30%, and smoking by 50%, loneliness reduces it by a whopping 70%. In fact, one study suggests that loneliness increases your chance of stroke or coronary heart disease — the leading cause of death in developed countries — by 30%. On the other hand, feelings of social connection can strengthen our immune system, lengthen our life, and lower rates of anxiety and depression.

Tracking retractions as a window into the scientific process: Publisher won’t retract two papers, despite university’s request

Alison McCook
Retraction Watch
Originally published August 4, 2017

Jens Förster, a high-profile social psychologist, has agreed to retract multiple papers following an institutional investigation — but has also fought to keep some papers intact. Recently, one publisher agreed with his appeal, and announced it would not retract two of his papers, despite the recommendation of his former employer.

Last month, the American Psychological Association (APA) announced it would not retract two papers co-authored by Förster, which the University of Amsterdam had recommended for retraction in May, 2015. The APA had followed the university’s advice last year and retracted two other papers, which Förster had agreed to as part of a settlement with the German Society for Psychology (DGPs). But after multiple appeals by Förster and his co-authors, the publisher has decided to retain the papers as part of the scientific record.

The information is here.

Sunday, August 20, 2017

The ethics of creating GMO humans

The Editorial Board
The Los Angeles Times
Originally posted August 3, 2017

Here is an excerpt:

But there is also a great deal we still don’t know about how minor issues might become major ones as people pass on edited DNA to their offspring, and as people who have had some genes altered reproduce with people who have had other genes altered. We’ve seen how selectively breeding to produce one trait can unexpectedly produce other, less desirable outcomes. Remember how growers were able to create tomatoes that were more uniformly red, but in the process, they turned off the gene that gave tomatoes flavor?

Another major issue is the ethics of adjusting humans genetically to fit a favored outcome. Today it’s heritable disease, but what might be seen as undesirable traits in the future that people might want to eliminate? Short stature? Introverted personality? Klutziness?

To be sure, it’s not as though everyone is likely to line up for gene-edited offspring rather than just having babies, at least for the foreseeable future. The procedure can be performed only on in vitro embryos and requires precision timing.

The article is here.

Saturday, August 19, 2017

The role of empathy in experiencing vicarious anxiety

Shu, J., Hassell, S., Weber, J., Ochsner, K. N., & Mobbs, D. (2017).
Journal of Experimental Psychology: General, 146(8), 1164-1188.

Abstract

With depictions of others facing threats common in the media, the experience of vicarious anxiety may be prevalent in the general population. However, the phenomenon of vicarious anxiety—the experience of anxiety in response to observing others expressing anxiety—and the interpersonal mechanisms underlying it have not been fully investigated in prior research. In 4 studies, we investigate the role of empathy in experiencing vicarious anxiety, using film clips depicting target victims facing threats. In Studies 1 and 2, trait emotional empathy was associated with greater self-reported anxiety when observing target victims, and with perceiving greater anxiety to be experienced by the targets. Study 3 extended these findings by demonstrating that trait empathic concern—the tendency to feel concern and compassion for others—was associated with experiencing vicarious anxiety, whereas trait personal distress—the tendency to experience distress in stressful situations—was not. Study 4 manipulated state empathy to establish a causal relationship between empathy and experience of vicarious anxiety. Participants who took an empathic perspective when observing target victims, as compared to those who took an objective perspective using reappraisal-based strategies, reported experiencing greater anxiety, risk-aversion, and sleep disruption the following night. These results highlight the impact of one’s social environment on experiencing anxiety, particularly for those who are highly empathic. In addition, these findings have implications for extending basic models of anxiety to incorporate interpersonal processes, understanding the role of empathy in social learning, and potential applications for therapeutic contexts.

The article is here.

CIA Psychologists Settle Torture Case Acknowledging Abuses

Peter Blumberg and Pamela Maclean
Bloomberg News
Originally published August 17, 2017

Two U.S. psychologists who helped design an overseas CIA interrogation program agreed to settle claims they were responsible for the torture of terrorism suspects, according to the American Civil Liberties Union, which brought the case.

The ACLU called the accord “historic” because it’s the first CIA-linked torture case of its kind that wasn’t dismissed, but said in a statement the terms of the settlement are confidential.

The case, which was set for a U.S. trial starting Sept. 5, focused on alleged abuses in the aftermath of the Sept. 11, 2001, attacks at secret “black-site” facilities that operated under President George W. Bush. The lawsuit followed the 2014 release of a congressional report on Central Intelligence Agency interrogation techniques.

The claims against the psychologists, who worked as government contractors, were filed on behalf of two suspected enemy combatants who were later released and a third who died in custody as a result of hypothermia during his captivity. All three men were interrogated at a site in Afghanistan, according to the ACLU.

ACLU lawyer Dror Ladin has said the case was a novel attempt to use the 1789 Alien Tort Claims Act to fix blame on U.S. citizens for human-rights violations committed abroad, unlike previous cases brought against foreigners.

The article is here.

Friday, August 18, 2017

Psychologists surveyed hundreds of alt-right supporters. The results are unsettling.

Brian Resnick
Vox.com
Originally posted August 15, 2017

Here is an excerpt:

The alt-right scores high on dehumanization measures

One of the starkest, darkest findings in the survey comes from a simple question: How evolved do you think other people are?

Kteily, the co-author on this paper, pioneered this new and disturbing way to measure dehumanization — the tendency to see others as being less than human. He simply shows study participants the following (scientifically inaccurate) image of a human ancestor slowly learning how to stand on two legs and become fully human.

Participants are asked to rate where certain groups fall on this scale from 0 to 100. Zero is not human at all; 100 is fully human.

On average, alt-righters saw other groups as hunched-over proto-humans.

On average, they rated Muslims at a 55.4 (again, out of 100), Democrats at 60.4, black people at 64.7, Mexicans at 67.7, journalists at 58.6, Jews at 73, and feminists at 57. These groups appear as subhumans to those taking the survey. And what about white people? They were scored at a noble 91.8. (You can look through all the data here.)

The article is here.

Trump fails morality test on Charlottesville

John Kass
Chicago Tribune
Originally posted on August 16, 2017

After the deadly violence of Charlottesville, Va., the amoral man in the White House failed his morality test. And in doing so, he gave the left a powerful weapon.

(cut)

So President Trump was faced with a question of morality.

All he had to do was be unequivocal in his condemnation of the alt-right mob.

His brand as an alpha in a sea of political beta males promised he wouldn't be equivocal about anything.

But he failed, miserably, his mouth and tongue transformed into a dollop of lukewarm tapioca, talking in equivocal terms about the violence on "many sides."

He then offered another statement, ostensibly to clarify and condemn the mob. But that was followed, predictably, by even more comments, as he desperately tried to publicly litigate his earlier failures.

In doing so, he gave the alt-right all they could dream of.

He said some attending the rally were "fine people."

Fine people don't go to white supremacist rallies to spew hate. Fine people don't remotely associate with the KKK. Fine people at a protest see men in white hoods and leave.

Fine people don't get in a car and, in a murderous rage, run others down, including Heather Heyer, who in her death has become a saint of the left.

The article is here.

Thursday, August 17, 2017

Donald Trump has a very clear attitude about morality: He doesn't believe in it

John Harwood | @johnjharwood
CNBC
Originally published August 16, 2017

The more President Donald Trump reveals his character, the more he isolates himself from the American mainstream.

In a raucous press conference this afternoon, the president again blamed "both sides" for deadly violence in Charlottesville. He equated "Unite the Right" protesters — a collection including white supremacists, neo-Nazis and ex-KKK leader David Duke — with protesters who showed up to counter them.

Earlier he targeted business leaders — specifically, executives from Merck, Under Armour, Intel, and the Alliance for American Manufacturing — who had quit a White House advisory panel over Trump's message. In a tweet, the president called them "grandstanders."

That brought two related conclusions into focus. The president does not share the instinctive moral revulsion most Americans feel toward white supremacists and neo-Nazis. And he feels contempt for those — like the executives — who are motivated to express that revulsion at his expense.

No belief in others' morality

Trump has displayed this character trait repeatedly. It combines indifference to conventional notions of morality or propriety with disbelief that others would be motivated by them.

He dismissed suggestions that it was inappropriate for his son and campaign manager to have met with Russians offering dirt on Hillary Clinton during the presidential campaign. "Most people would have taken the meeting," he said. "Politics isn't the nicest business."

The article is here.

New Technology Standards Guide Social Work Practice and Education

Susan A. Knight
Social Work Today
Vol. 17 No. 4 P. 10

Today's technological landscape is vastly different from what it was just 10 to 15 years ago. Smartphones have replaced home landlines. Texting has become an accepted form of communication, both personally and professionally. Across sectors—health and human services, education, government, and business—employees conduct all manner of work on tablets and other portable devices. Along with "liking" posts on Facebook, people are tracking hashtags on Twitter, sending messages via Snapchat, and pinning pictures to Pinterest.

To top it all off, it seems that there's always a fresh controversy emerging because someone shared something questionable on a social media platform for the general public to see and critique.

Like every other field, social work practice is dealing with issues, challenges, and risks that were previously nonexistent. The NASW and Association of Social Work Boards (ASWB) Standards for Technology and Social Work Practice, dating back to 2005, was in desperate need of a rework in order to address all the changes and complexities within the technological environment that social workers are forced to contend with.

The newly released updated standards are the result of a collaborative effort between four major social work organizations: NASW, ASWB, the Clinical Social Work Association (CSWA), and the Council on Social Work Education (CSWE). "The intercollaboration in the development of the technology standards provides one consensus product and resource for social workers to refer to," says Mirean Coleman, MSW, LICSW, CT, clinical manager of NASW.

The article is here.

Wednesday, August 16, 2017

Learning morality through gaming

Jordan Erica Webber
The Guardian
Originally published 13 August 2017

Here is an excerpt:

Whether or not you agree with Snowden’s actions, the idea that playing video games could affect a person’s ethical position or even encourage any kind of philosophical thought is probably surprising. Yet we’re used to the notion that a person’s thinking could be influenced by the characters and conundrums in books, film and television; why not games? In fact, games have one big advantage that makes them especially useful for exploring philosophical ideas: they’re interactive.

As any student of philosophy will tell you, one of the primary ways of engaging with abstract questions is through thought experiments. Is Schrödinger’s cat dead or alive? Would you kill one person to save five? A thought experiment presents an imagined scenario (often because it wouldn’t be viable to perform the experiment in real life) to test intuitions about the consequences.

Video games, too, are made up of counterfactual narratives that test the player: here is a scenario, what would you do? Unlike books, film and television, games allow you to act on your intuition. Can you kill a character you’ve grown to know over hours of play, if it would save others?

The article is here.

What Does Patient Autonomy Mean for Doctors and Drug Makers?

Christina Sandefur
The Conversation
Originally published July 26, 2017

Here is an excerpt:

Although Bateman-House fears that deferring to patients comes at the expense of physician autonomy, she also laments that physicians currently abuse the freedom they have, failing to spend enough time with their patients, which she says undermines a patient’s ability to make informed medical decisions.

Even if it’s true that physician consultations aren’t as thorough as they once were, patients today have better access to health care information than ever before. According to the Pew Research Center, two-thirds of U.S. adults have broadband internet in their homes, and 13 percent who lack it can access the internet through a smartphone. Pew reports that more than half of adult internet users go online to get information on medical conditions, 43 percent on treatments, and 16 percent on drug safety. Yet despite their desire to research these issues online, 70 percent still sought out additional information from a doctor or other professional.

In other words, people are making greater efforts to learn about health care on their own. True, not all such information on the internet is accurate. But encouraging patients to seek out information from multiple sources is a good thing. In fact, requiring government approval of treatments may lull patients into a false sense of security. As Connor Boyack, president of the Libertas Institute, points out, “Instead of doing their own due diligence and research, the overwhelming majority of people simply concern themselves with whether or not the FDA says a certain product is okay to use.” But blind reliance on a government bureaucracy is rarely a good idea.

The article can be found here.

Tuesday, August 15, 2017

The ethical argument against philanthropy

Olivia Goldhill
Quartz
Originally posted July 22, 2017

Exceptionally wealthy people aren’t a likeable demographic, but they have an easy way to boost personal appeal: Become an exceptionally wealthy philanthropist. When the rich use their money to support a good cause, we’re compelled to compliment their generosity and praise their selfless work.

This is entirely the wrong response, according to Rob Reich, director of the Center for Ethics in Society at Stanford University.

Big philanthropy is, he says, “the odd encouragement of a plutocratic voice in a democratic society.” By offering philanthropists nothing but gratitude, we allow a huge amount of power to go unchecked. “Philanthropy, if you define it as the deployment of private wealth for some public influence, is an exercise of power. In a democratic society, power deserves scrutiny,” he adds.

A philanthropic foundation is a form of unaccountable power quite unlike any other organization in society. Government is at least somewhat beholden to voters, and private companies must contend with marketplace competition and the demands of shareholders.

But until the day that government services alleviate all human need, perhaps we should be willing to overlook the power dynamics of philanthropy—after all, surely charity in unchecked form is better than nothing?

The article is here.

Inferences about moral character moderate the impact of consequences on blame and praise

Jenifer Z. Siegel, Molly J. Crockett, and Raymond J. Dolan
Cognition
Volume 167, October 2017, Pages 201-211

Abstract

Moral psychology research has highlighted several factors critical for evaluating the morality of another’s choice, including the detection of norm-violating outcomes, the extent to which an agent caused an outcome, and the extent to which the agent intended good or bad consequences, as inferred from observing their decisions. However, person-centered accounts of moral judgment suggest that a motivation to infer the moral character of others can itself impact on an evaluation of their choices. Building on this person-centered account, we examine whether inferences about agents’ moral character shape the sensitivity of moral judgments to the consequences of agents’ choices, and agents’ role in the causation of those consequences. Participants observed and judged sequences of decisions made by agents who were either bad or good, where each decision entailed a trade-off between personal profit and pain for an anonymous victim. Across trials we manipulated the magnitude of profit and pain resulting from the agent’s decision (consequences), and whether the outcome was caused via action or inaction (causation). Consistent with previous findings, we found that moral judgments were sensitive to consequences and causation. Furthermore, we show that the inferred character of an agent moderated the extent to which people were sensitive to consequences in their moral judgments. Specifically, participants were more sensitive to the magnitude of consequences in judgments of bad agents’ choices relative to good agents’ choices. We discuss and interpret these findings within a theoretical framework that views moral judgment as a dynamic process at the intersection of attention and social cognition.

The article is here.

Monday, August 14, 2017

AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?

Mark Wilson
Co.Design
Originally posted July 14, 2017

Here is an excerpt:

But how could any of this technology actually benefit the world, beyond these theoretical discussions? Would our servers be able to operate more efficiently with bots speaking to one another in shorthand? Could microsecond processes, like algorithmic trading, see some reasonable increase? Chatting with Facebook, and various experts, I couldn’t get a firm answer.

However, as paradoxical as this might sound, we might see big gains in such software better understanding our intent. While two computers speaking their own language might be more opaque, an algorithm predisposed to learn new languages might chew through strange new data we feed it more effectively. For example, one researcher recently tried to teach a neural net to create new colors and name them. It was terrible at it, generating names like Sudden Pine and Clear Paste (that clear paste, by the way, was labeled on a light green). But then they made a simple change to the data they were feeding the machine to train it. They made everything lowercase – because lowercase and uppercase letters were confusing it. Suddenly, the color-creating AI was working, well, pretty well! And for whatever reason, it preferred, and performed better, with RGB values as opposed to other numerical color codes.

Why did these simple data changes matter? Basically, the researcher did a better job at speaking the computer’s language. As one coder put it to me, “Getting the data into a format that makes sense for machine learning is a huge undertaking right now and is more art than science. English is a very convoluted and complicated language and not at all amicable for machine learning.”
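
As a rough illustration of the kind of normalization step described above (not the researcher's actual pipeline), the sketch below lowercases color names and keeps colors as plain RGB integers before they would be fed to a model. The function name and sample data are invented for the example.

```python
# Illustrative preprocessing sketch: normalize (name, RGB) training examples.
# Names and values are hypothetical; this is not the original researcher's code.

def normalize_color_examples(examples):
    """Lowercase names and keep RGB triples as integers, giving the model a
    smaller, more consistent vocabulary to learn from."""
    cleaned = []
    for name, rgb in examples:
        name = name.lower().strip()          # e.g. "Sudden Pine" -> "sudden pine"
        r, g, b = (int(v) for v in rgb)      # keep colors as plain RGB integers
        cleaned.append((name, (r, g, b)))
    return cleaned

raw = [("Sudden Pine", (112, 140, 90)), ("Clear Paste", (200, 230, 190))]
print(normalize_color_examples(raw))
```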

The article is here.

Moral alchemy: How love changes norms

Rachel W. Magid and Laura E. Schulz
Cognition
Volume 167, October 2017, Pages 135-150

Abstract

We discuss a process by which non-moral concerns (that is, concerns agreed to be non-moral within a particular cultural context) can take on moral content. We refer to this phenomenon as moral alchemy and suggest that it arises because moral obligations of care entail recursively valuing loved ones’ values, thus allowing propositions with no moral weight in themselves to become morally charged. Within this framework, we predict that when people believe a loved one cares about a behavior more than they do themselves, the moral imperative to care about the loved one’s interests will raise the value of that behavior, such that people will be more likely to infer that third parties will see the behavior as wrong (Experiment 1) and the behavior itself as more morally important (Experiment 2) than when the same behaviors are considered outside the context of a caring relationship. The current study confirmed these predictions.

The article is here.

Sunday, August 13, 2017

Ethical and legal considerations in psychobiography

Jason D Reynolds and Taewon Choi
American Psychologist 2017 Jul-Aug;72(5):446-458

Abstract

Despite psychobiography's long-standing history in the field of psychology, there has been relatively little discussion of ethical issues and guidelines in psychobiographical research. The Ethics Code of the American Psychological Association (APA) does not address psychobiography. The present article highlights the value of psychobiography to psychology, reviews the history and current status of psychobiography in the field, examines the relevance of existing APA General Principles and Ethical Standards to psychobiographical research, and introduces a best practice ethical decision-making model to assist psychologists working in psychobiography. Given the potential impact of psychologists' evaluative judgments on other professionals and the lay public, it is emphasized that psychologists and other mental health professionals have a high standard of ethical vigilance in conducting and reporting psychobiography.

The article is here.

Saturday, August 12, 2017

Reminder: the Trump International Hotel is still an ethics disaster

Carly Sitrin
Vox.com
Originally published August 8, 2017

The Trump International Hotel in Washington, DC, has been serving as a White House extension since Donald Trump took office, and experts think this violates several governmental ethics rules.

The Washington Post reported Monday that the Trump International Hotel has played host to countless foreign dignitaries, Republican lawmakers, and powerful actors hoping to hold court with Trump appointees or even the president himself.

Since visitation records at the Trump International Hotel are not made public, the Post sent reporters to the hotel every day in May to try to identify people and organizations using the facilities.

What they found was a revolving door of powerful people holding galas in the hotel’s lavish ballrooms and meeting over expensive cocktails with White House staff at the bar.

They included Rep. Dana Rohrabacher (R-CA), whom Politico recently called “Putin’s favorite congressman”; Rep. Bill Shuster (R-PA), who chairs the House committee that oversees the General Services Administration, the Trump hotel's landlord; and nine other Republican members of Congress who all hosted events at the hotel, according to campaign spending disclosures obtained by the Post. Foreign visitors, including business groups promoting Turkish-American relations and Romanian President Klaus Iohannis and his wife, also rented rooms.

The article is here.

Friday, August 11, 2017

What an artificial intelligence researcher fears about AI

Arend Hintze
TechXplore.com
Originally published July 14, 2017

Here is an excerpt:

Fear of the nightmare scenario

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time – somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things – as are saying we want to save the planet and successfully doing so.

The article is here.

The real problem (of consciousness)

Anil K Seth
Aeon.com
Originally posted November 2, 2016

Here is an excerpt:

The classical view of perception is that the brain processes sensory information in a bottom-up or ‘outside-in’ direction: sensory signals enter through receptors (for example, the retina) and then progress deeper into the brain, with each stage recruiting increasingly sophisticated and abstract processing. In this view, the perceptual ‘heavy-lifting’ is done by these bottom-up connections. The Helmholtzian view inverts this framework, proposing that signals flowing into the brain from the outside world convey only prediction errors – the differences between what the brain expects and what it receives. Perceptual content is carried by perceptual predictions flowing in the opposite (top-down) direction, from deep inside the brain out towards the sensory surfaces. Perception involves the minimisation of prediction error simultaneously across many levels of processing within the brain’s sensory systems, by continuously updating the brain’s predictions. In this view, which is often called ‘predictive coding’ or ‘predictive processing’, perception is a controlled hallucination, in which the brain’s hypotheses are continually reined in by sensory signals arriving from the world and the body. ‘A fantasy that coincides with reality,’ as the psychologist Chris Frith eloquently put it in Making Up the Mind (2007).
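
As a toy numerical illustration of the update behind such accounts (a sketch of my own, not anything from the essay or from any specific model), a single prediction can be nudged toward a sensory signal in proportion to the prediction error; the signal value, learning rate, and step count below are invented.

```python
# Toy sketch of prediction-error minimisation: a top-down prediction is
# repeatedly corrected by the bottom-up prediction error until the two match.
# All numbers are invented for illustration.

def minimise_prediction_error(prediction, sensory_signal, rate=0.1, steps=50):
    for _ in range(steps):
        error = sensory_signal - prediction   # bottom-up prediction error
        prediction += rate * error            # top-down prediction updated
    return prediction

print(minimise_prediction_error(prediction=0.0, sensory_signal=1.0))
# approaches 1.0 as the prediction error shrinks
```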

Armed with this theory of perception, we can return to consciousness. Now, instead of asking which brain regions correlate with conscious (versus unconscious) perception, we can ask: which aspects of predictive perception go along with consciousness? A number of experiments are now indicating that consciousness depends more on perceptual predictions than on prediction errors. In 2001, Alvaro Pascual-Leone and Vincent Walsh at Harvard Medical School asked people to report the perceived direction of movement of clouds of drifting dots (so-called ‘random dot kinematograms’). They used TMS to specifically interrupt top-down signalling across the visual cortex, and they found that this abolished conscious perception of the motion, even though bottom-up signals were left intact.

The article is here.

Thursday, August 10, 2017

Predatory Journals Hit By ‘Star Wars’ Sting

By Neuroskeptic
discovermagazine.com
Originally published July 19, 2017

A number of so-called scientific journals have accepted a Star Wars-themed spoof paper. The manuscript is an absurd mess of factual errors, plagiarism and movie quotes. I know because I wrote it.

Inspired by previous publishing “stings”, I wanted to test whether ‘predatory’ journals would publish an obviously absurd paper. So I created a spoof manuscript about “midi-chlorians” – the fictional entities which live inside cells and give Jedi their powers in Star Wars. I filled it with other references to the galaxy far, far away, and submitted it to nine journals under the names of Dr Lucas McGeorge and Dr Annette Kin.

Four journals fell for the sting. The American Journal of Medical and Biological Research (SciEP) accepted the paper, but asked for a $360 fee, which I didn’t pay. Amazingly, three other journals not only accepted but actually published the spoof: the International Journal of Molecular Biology: Open Access (MedCrave), the Austin Journal of Pharmacology and Therapeutics (Austin), and the American Research Journal of Biosciences (ARJ). I hadn’t expected this, as all those journals charge publication fees, but I never paid them a penny.

The blog post is here.

Wednesday, August 9, 2017

Career of the Future: Robot Psychologist

Christopher Mims
The Wall Street Journal
Originally published July 9, 2017

Artificial-intelligence engineers have a problem: They often don’t know what their creations are thinking.

As artificial intelligence grows in complexity and prevalence, it also grows more powerful. AI already has factored into decisions about who goes to jail and who receives a loan. There are suggestions AI should determine who gets the best chance to live when a self-driving car faces an unavoidable crash.

Defining AI is slippery and growing more so, as startups slather the buzzword over whatever they are doing. It is generally accepted as any attempt to ape human intelligence and abilities.

One subset that has taken off is neural networks, systems that “learn” as humans do through training, turning experience into networks of simulated neurons. The result isn’t code, but an unreadable, tangled mass of millions—in some cases billions—of artificial neurons, which explains why those who create modern AIs can be befuddled as to how they solve tasks.
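
As a purely illustrative aside (not from the article), the toy network below learns XOR and then prints its learned weights; even at this tiny scale the numbers say little about how the answer is computed, which is the interpretability problem in miniature. The architecture, random seed, and training settings are arbitrary choices.

```python
# A tiny neural network trained on XOR, to illustrate why learned weights
# are hard to read. All settings are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)               # hidden layer activations
    out = sigmoid(h @ W2 + b2)             # network output
    grad_out = (out - y) * out * (1 - out) # output-layer error signal
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ grad_out)
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ grad_h)
    b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(out, 2))  # predictions; with this setup they typically near [0, 1, 1, 0]
print(np.round(W1, 2))   # the learned weights themselves explain little
```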

Most researchers agree the challenge of understanding AI is pressing. If we don’t know how an artificial mind works, how can we ascertain its biases or predict its mistakes?

We won’t know in advance if an AI is racist, or what unexpected thought patterns it might have that would make it crash an autonomous vehicle. We might not know about an AI’s biases until long after it has made countless decisions. It’s important to know when an AI will fail or behave unexpectedly—when it might tell us, “I’m sorry, Dave. I’m afraid I can’t do that.”

“A big problem is people treat AI or machine learning as being very neutral,” said Tracy Chou, a software engineer who worked with machine learning at Pinterest Inc. “And a lot of that is people not understanding that it’s humans who design these models and humans who choose the data they are trained on.”

The article is here.

Tuesday, August 8, 2017

The next big corporate trend? Actually having ethics.

Patrick Quinlan
Recode.net
Originally published July 20, 2017

Here is an excerpt:

Slowly, brands are waking up to the fact that strong ethics and core values are no longer a “nice to have,” but a necessity. Failure to take responsibility in times of crisis can take an irreparable toll on the trust companies have worked so hard to build with employees, partners and customers. So many brands are still getting it wrong, and the consequences are real — public boycotting, massive fines, fired CEOs and falling stock prices.

This shift is what I call ethical transformation — the application of ethics and values across all aspects of business and society. It’s as impactful and critical as digital transformation, the other megatrend of the last 20 years. You can’t have one without the other. The internet stripped away barriers between consumers and brands, meaning that transparency and attention to ethics and values is at an all-time high. Brands have to get on board, now. Consider some oft-cited casualties of the digital transformation: Blockbuster, Kodak and Sears. That same fate awaits companies that can’t or won’t prioritize ethics and values.

This is a good thing. Ethical transformation pushes us into a better future, one built on genuinely ethical companies. But it’s not easy. In fact, it’s pretty hard. And it takes time. For decades, most of the business world focused on what not to do or how not to get fined. (In a word: Compliance.) Every so often, ethics and its even murkier brother “values” got a little love as an afterthought. Brands that did focus on values and ethics were considered exceptions to the rule — the USAAs and Toms shoes of the world. No longer.

The article is here.

Monday, August 7, 2017

Study suggests why more skin in the game won't fix Medicaid

Don Sapatkin
Philly.com
Originally posted July 19, 2017

Here is an excerpt:

Previous studies have found that increasing cost-sharing causes consumers to skip medical care somewhat indiscriminately. The Dutch research was the first to examine the impact of cost-sharing changes on specialty mental health care, the authors wrote.

Jalpa A. Doshi, a researcher at the University of Pennsylvania’s Leonard Davis Institute of Health Economics, has examined how Americans with commercial insurance respond to cost-sharing for antidepressants.

“Because Medicaid is the largest insurer of low-income individuals with serious mental illnesses such as schizophrenia and bipolar disorder in the United States, lawmakers should be cautious on whether an increase in cost sharing for such a vulnerable group may be a penny-wise, pound-foolish policy,” Doshi said in an email after reading the new study.

Michael Brody, president and CEO of Mental Health Partnerships, formerly the Mental Health Association of Southeastern Pennsylvania, had an even stronger reaction about the possible implications for Medicaid patients.

The article is here.

Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci

Sven Nyholm
Science and Engineering Ethics
pp 1–19

Many ethicists writing about automated systems (e.g. self-driving cars and autonomous weapons systems) attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed makes sense to attribute different forms of fairly sophisticated agency to these machines, we ought not to regard them as acting on their own, independently of any human beings. Rather, the right way to understand the agency exercised by these machines is in terms of human–robot collaborations, where the humans involved initiate, supervise, and manage the agency of their robotic collaborators. This means, I argue, that there is much less room for justified worries about responsibility-gaps and retribution-gaps than many ethicists think.

The article is here.

Sunday, August 6, 2017

An erosion of ethics oversight should make us all more cynical about Trump

The Editorial Board
The Los Angeles Times
Originally published August 4, 2017

President Trump’s problems with ethics are manifest, from his refusal to make public his tax returns to the conflicts posed by his continued stake in the Trump Organization and its properties around the world — including the Trump International Hotel just down the street from the White House, in a building leased from the federal government he’s now in charge of. The president’s stubborn refusal to hew to the ethical norms set by his predecessors has left the nation to rightfully question whose best interests are foremost in his mind.

Some of the more persistent challenges to the Trump administration’s comportment have come from the Office of Government Ethics, whose recently departed director, Walter M. Shaub Jr., fought with the administration frequently over federal conflict-of-interest regulations. Under agency rules, chief of staff Shelley K. Finlayson should have been Shaub’s successor until the president nominated a new director, who would need Senate confirmation.

But Trump upended that transition last month by naming the office’s general counsel, David J. Apol, as the interim director. Apol has a reputation within the agency for taking contrarian — and usually more lenient — stances on ethics requirements than did Shaub and the consensus opinion of the staff (including Finlayson). And that, of course, raises the question of whether the White House replaced Finlayson with Apol in hopes of having a more conciliatory ethics chief without enduring a grueling nomination fight.

The article is here.

Saturday, August 5, 2017

Empathy makes us immoral

Olivia Goldhill
Quartz
Originally published July 9, 2017

Empathy, in general, has an excellent reputation. But it leads us to make terrible decisions, according to Paul Bloom, psychology professor at Yale and author of Against Empathy: The Case for Rational Compassion. In fact, he argues, we would be far more moral if we had no empathy at all.

Though it sounds counterintuitive, Bloom makes a convincing case. First, he makes a point of defining empathy as putting yourself in the shoes of other people—“feeling their pain, seeing the world through their eyes.” When we rely on empathy to make moral decisions, he says, we end up prioritizing the suffering of the person we can easily relate to over that of any number of others who seem more distant. Indeed, studies have shown that empathy does encourage irrational moral decisions that favor one individual over the masses.

“When we rely on empathy, we think that a little girl stuck down a well is more important than all of climate change, is more important than tens of thousands of people dying in a far away country,” says Bloom. “Empathy zooms us in on the attractive, on the young, on people of the same race. It zooms us in on the one rather than the many. And so it distorts our priorities.”

The article is here.

Friday, August 4, 2017

Moral distress in physicians and nurses: Impact on professional quality of life and turnover

Austin, Cindy L.; Saylor, Robert; Finley, Phillip J.
Psychological Trauma: Theory, Research, Practice, and Policy, Vol 9(4), Jul 2017, 399-406.

Abstract

Objective: The purpose of this study was to investigate moral distress (MD) and turnover intent as related to professional quality of life in physicians and nurses at a tertiary care hospital.

Method: Health care providers from a variety of hospital departments anonymously completed 2 validated questionnaires (Moral Distress Scale–Revised and Professional Quality of Life Scale). Compassion fatigue (as measured by secondary traumatic stress [STS] and burnout [BRN]) and compassion satisfaction are subscales which make up one’s professional quality of life. Relationships between these constructs and clinicians’ years in health care, critical care patient load, and professional discipline were explored.

Results: The findings (n = 329) demonstrated significant correlations between STS, BRN, and MD. Scores associated with intentions to leave or stay in a position were indicative of high versus low MD. We report the situations that scored highest on MD, as well as when physicians and nurses are most at risk for STS, BRN, and MD. Both physicians and nurses identified the events contributing to the highest level of MD as being compelled to provide care that seems ineffective and working with a critical care patient load >50%.

Conclusion: The results from this study of physicians and nurses suggest that the presence of MD significantly impacts turnover intent and professional quality of life. Therefore, implementation of emotional wellness activities (e.g., empowerment, opportunities for open dialogue regarding ethical dilemmas, involvement in policy making), coupled with ongoing monitoring and routine assessment of these maladaptive characteristics, is warranted.

The article is here.

Re: Nudges in a Post-truth World

Guest Post: Nathan Hodson
Journal of Medical Ethics Blog
Originally posted July 19, 2017

Here is an excerpt:

As Levy notes, some people are concerned that nudges present a threat to autonomy. Attempts at reconciling nudges with ethics, then, are important because nudging in healthcare is here to stay but we need to ensure it is used in ways that respect autonomy (and other moral principles).

The term “nudge” is perhaps a misnomer. To fill out the concept a bit, it commonly denotes the application of behavioural economics and behavioural psychology to the construction of choice architecture through carefully designed trials. But every choice we face, in any context, already comes with a choice architecture: there are endless contextual factors that impact the decisions we make.

When we ask whether nudging is acceptable we are asking whether an arbitrary or random choice architecture is more acceptable than a deliberate choice architecture, or whether an uninformed choice architecture is better than one informed by research.

In fact the permissibility of a nudge derives from whether it is being used in an ethically acceptable way, something that can only be explored on an individual basis. Thaler and Sunstein locate ethical acceptability in promoting the health of the person being nudged (and call this Libertarian Paternalism — i.e. sensible choices are promoted but no option is foreclosed). An alternative approach was proposed by Mitchell: nudges are justified if they maximise future liberty. Either way the nudging itself is not inherently problematic.

The article is here.

Thursday, August 3, 2017

The Trouble With Sex Robots

By Laura Bates
The New York Times
Originally posted

Here is an excerpt:

One of the authors of the Foundation for Responsible Robotics report, Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield, England, said there are ethical arguments within the field about sex robots with “frigid” settings.

“The idea is robots would resist your sexual advances so that you could rape them,” Professor Sharkey said. “Some people say it’s better they rape robots than rape real people. There are other people saying this would just encourage rapists more.”

Like the argument that women-only train compartments are an answer to sexual harassment and assault, the notion that sex robots could reduce rape is deeply flawed. It suggests that male violence against women is innate and inevitable, and can be only mitigated, not prevented. This is not only insulting to a vast majority of men, but it also entirely shifts responsibility for dealing with these crimes onto their victims — women, and society at large — while creating impunity for perpetrators.

Rape is not an act of sexual passion. It is a violent crime. We should no more be encouraging rapists to find a supposedly safe outlet for it than we should facilitate murderers by giving them realistic, blood-spurting dummies to stab. Since that suggestion sounds ridiculous, why does the idea of providing sexual abusers with lifelike robotic victims sound feasible to some?

The article is here.

The Wellsprings of Our Morality

Daniel M.T. Fessler
What can evolution tell us about morality?
http://www.humansandnature.org

Mother Nature is amoral, yet morality is universal. The natural world lacks both any guiding hand and any moral compass. And yet all human societies have moral rules, and, with the exception of some individuals suffering from pathology, all people experience profound feelings that shape their actions in light of such rules. Where then did these constellations of rules and feelings come from?

The term “morality” jumbles rules and feelings, as well as judgments of others’ actions that result from the intersection of rules and feelings. Rules, like other features of culture, are ideas transmitted from person to person: “It is laudable to do X,” “It is a sin to do Y,” etc. Feelings are internal states evoked by events, or by thoughts of future possibilities: “I am proud that she did X,” “I am outraged that he did Y,” and so on. Praise or condemnation are social acts, often motivated by feelings, in response to other people’s behavior. All of this is commonly called “morality.”

So, what does it mean to say that morality is universal? You don’t need to be an anthropologist to recognize that, while people everywhere experience strong feelings about others’ behavior—and, as a result, reward or punish that behavior—cultures differ with regard to the beliefs on which they base such judgments. Is injustice a graver sin than disrespect for tradition? Which is more important, the autonomy of the individual or the harmony of the group? The answer is that it depends on whom you ask.

The information is here.

Wednesday, August 2, 2017

Ships in the Rising Sea? Changes Over Time in Psychologists’ Ethical Beliefs and Behaviors

Rebecca A. Schwartz-Mette & David S. Shen-Miller
Ethics & Behavior 

Abstract

Beliefs about the importance of ethical behavior to competent practice have prompted major shifts in psychology ethics over time. Yet few studies examine ethical beliefs and behavior after training, and most comprehensive research is now 30 years old. As such, it is unclear whether shifts in the field have resulted in general improvements in ethical practice: Are we psychologists “ships in the rising sea,” lifted by changes in ethical codes and training over time? Participants (N = 325) completed a survey of ethical beliefs and behaviors (Pope, Tabachnick, & Keith-Spiegel, 1987). Analyses examined group differences, consistency of frequency and ethicality ratings, and comparisons with past data. More than half of behaviors were rated as less ethical and occurring less frequently than in 1987, with early career psychologists generally reporting less ethically questionable behavior. Recommendations for enhancing ethics education are discussed.

The article is here.

A Primatological Perspective on Evolution and Morality

Sarah F. Brosnan
What can evolution tell us about morality?
http://www.humansandnature.org

Morality is a key feature of humanity, but how did we become a moral species? And is morality a uniquely human phenomenon, or do we see its roots in other species? One of the most fun parts of my research is studying the evolutionary basis of behaviors that we think of as quintessentially human, such as morality, to try to understand where they came from and what purpose they serve. In so doing, we can not only better understand why people behave the way that they do, but we also may be able to develop interventions that promote more beneficial decision-making.

Of course, a “quintessentially human” behavior is not replicated, at least in its entirety, in another species, so how does one study the evolutionary history of such behaviors? To do so, we focus on precursor behaviors that are related to the one in question and provide insight into the evolution of the target behavior. A precursor behavior may look very different from the final instantiation; for instance, birds’ wings appear to have originated as feathers that were used for either insulation or advertisement (i.e., sexual selection) that, through a series of intermediate forms, evolved into feathered wings. The chemical definition may be even more apt; a precursor molecule is one that triggers a reaction, resulting in a chemical that is fundamentally different from the initial chemicals used in the reaction.

How is this related to morality? We would not expect to see human morality in other species, as morality implies the ability to debate ethics and develop group rules and norms, which is not possible in non-verbal species. However, complex traits like morality do not arise de novo; like wings, they evolve from existing traits. Therefore, we can look for potential precursors in other species in order to better understand the evolutionary history of morality.

The information is here.

Tuesday, August 1, 2017

Morality isn’t a compass — it’s a calculator

DB Krupp
The Conversation
Originally published July 9, 2017

Here is the conclusion:

Unfortunately, the beliefs that straddle moral fault lines are largely impervious to empirical critique. We simply embrace the evidence that supports our cause and deny the evidence that doesn’t. If strategic thinking motivates belief, and belief motivates reason, then we may be wasting our time trying to persuade the opposition to change their minds.

Instead, we should strive to change the costs and benefits that provoke discord in the first place. Many disagreements are the result of worlds colliding — people with different backgrounds making different assessments of the same situation. By closing the gap between their experiences and by lowering the stakes, we can bring them closer to consensus. This may mean reducing inequality, improving access to health care or increasing contact between unfamiliar groups.

We have little reason to see ourselves as unbiased sources of moral righteousness, but we probably will anyway. The least we can do is minimize that bias a bit.

The article is here.

Henderson psychologist charged with murder can reopen practice

David Ferrara
Las Vegas Review-Journal
Originally posted July 14, 2017

A psychologist accused of killing his wife and staging her death as a suicide can start practicing psychology again in less than four months, the Nevada Board of Psychological Examiners decided Friday.

Suspected of abusing drugs and obtaining prescription drugs from patients, Gregory “Brent” Dennis, who prosecutors say poisoned attorney Susan Winters inside their Henderson home, also must undergo up to seven years of drug treatment, the seven-member panel ruled as they signed a settlement agreement that made no mention of the murder charge.

“It’s clear that the board members do not know what Brent Dennis was arrested for,” Keith Williams, a lawyer for the Winters family, told a Las Vegas Review-Journal reporter after the meeting. “We’re confident that they did not know what they were voting on today.”

Henderson police arrested Dennis on the murder charge in February.

The article is here.