Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, May 31, 2017

4 questions for Paul Bloom

By Lea Winerman
May 2017, Vol 48, No. 5
Print version: page 27

Here is an excerpt:

Why do you believe this kind of empathy is overrated?

I should be clear that I'm not against empathy in general. I think it's a great source of pleasure, for instance, and it plays some role in intimate relationships. But when it comes to moral judgments, empathy makes a very poor guide.

One reason is that it's biased. You naturally empathize with people who in some way are part of your circle, who look like you, who maybe share your ethnicity. So, for example, if you base your charitable giving choices on empathy, you find yourself inevitably giving to people who [are like you], and ignoring the plight of thousands, maybe millions of others.

Another problem is that empathy is innumerate. It's a spotlight—you zoom in on one person, as opposed to many. Some people think that this is one of its advantages. But real-world moral decisions involve coping with numbers. They often involve a recognition, for instance, that helping just one person can make lives worse for hundreds or thousands of others. The innumeracy of empathy often leads to paradoxical situations where we're desperate to help a single person—or even a cute puppy—while ignoring crises like climate change, because although millions of people will be affected by it, there's no identifiable victim to zoom in on.

A third problem is that empathy can be weaponized. So, unscrupulous politicians use our empathy for victims of certain crimes to motivate anger and hatred toward other, marginalized, groups. We saw a lot of that in the last election season.

The article is here.

More CEOs Are Getting Fired After an Ethical Lapse, Study Finds

Vanessa Fuhrmans
The Wall Street Journal
Originally posted May 14, 2017

Ethical breaches are causing more chief executives to lose their jobs. The upside? Researchers say the rising numbers don’t point to more corporate misbehavior: It’s that CEOs are being held to a higher level of accountability.

Among the myriad reasons corporate bosses leave their jobs, firings have been on the decline. In a study of CEO exits at the world’s 2,500 largest public companies, researchers at PricewaterhouseCoopers LLP’s strategy consulting arm, called Strategy&, found 20% of CEO exits in the past five years were forced, down from 31% of CEO exits in the previous five years.

But CEO ousters due to ethical lapses—either their own improper conduct, or their employees’—are climbing. Such forced exits rose to 5.3% of CEO departures in the 2012-to-2016 period, up from 3.9% during the previous five years.

The article is here.

Tuesday, May 30, 2017

There’s a Right Way and a Wrong Way to Do Empathy

By Sarah Watts
The Science of Us
Originally published May 18, 2017

Here is an excerpt:

When we talk about empathy, we tend to talk about it as an unqualified good thing. Research has shown that empathy is associated with kindness and helping behaviors, while its absence, clinically referred to as psychopathy, is associated with manipulation and criminal deviance. Empathy, some scientists have concluded, allows us to function well with others and survive as a species.

But what people often don’t talk about is how even a good thing like empathy can still be emotionally draining. Empathic people who easily take on other people’s feelings can spend their days feeling overwhelmed, hurt, and heavyhearted. Empathy, in other words, can be downright stressful. So would it be fair to say that sometimes it’s unhealthy?

A paper published earlier this month in the Journal of Experimental Psychology set out to answer exactly that. According to the authors, there are “two routes” to empathy. The first is imagining how someone else might feel in a given circumstance, called “imagine-other-perspective-taking,” or IOPT. The second is actually imagining yourself in the other person’s situation, called “imagine-self-perspective-taking,” or ISPT. With IOPT, you acknowledge another person’s feelings; with ISPT, you take on that person’s feelings as your own.

The article is here.

Game Theory and Morality

Moshe Hoffman, Erez Yoeli, and Carlos David Navarrete
The Evolution of Morality
Part of the series Evolutionary Psychology pp 289-316

Here is an excerpt:

The key result for evolutionary dynamic models is that, except under extreme conditions, behavior converges to Nash equilibria. This result rests on one simple, noncontroversial assumption shared by all evolutionary dynamics: Behaviors that are relatively successful will increase in frequency. Based on this logic, game theory models have been fruitfully applied in biological contexts to explain phenomena such as animal sex ratios (Fisher, 1958), territoriality (Smith & Price, 1973), cooperation (Trivers, 1971), sexual displays (Zahavi, 1975), and parent–offspring conflict (Trivers, 1974). More recently, evolutionary dynamic models have been applied in human contexts where conscious deliberation is believed to not play an important role, such as in the adoption of religious rituals (Sosis & Alcorta, 2003), in the expression and experience of emotion (Frank, 1988; Winter, 2014), and in the use of indirect speech (Pinker, Nowak, & Lee, 2008).

Crucially for this chapter, because our behaviors are mediated by moral intuitions and ideologies, if our moral behaviors converge to Nash, so must the intuitions and ideologies that motivate them. The resulting intuitions and ideologies will bear the signature of their game theoretic origins, and this signature will lend clarity to the puzzling, counterintuitive, and otherwise hard-to-explain features of our moral intuitions, as exemplified by our motivating examples.

In order for game theory to be relevant to understanding our moral intuitions and ideologies, we need only the following simple assumption: Moral intuitions and ideologies that lead to higher payoffs become more frequent. This assumption can be met if moral intuitions that yield higher payoffs are held more tenaciously, are more likely to be imitated, or are genetically encoded. For example, if every time you transgress by commission you are punished, but every time you transgress by omission you are not, you will start to intuit that commission is worse than omission.
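
The chapter's core claim here is quantitative: intuitions that yield relatively high payoffs increase in frequency until play settles at a Nash equilibrium. The toy simulation below is a minimal sketch of that logic, not anything taken from the chapter itself. The payoff numbers, the two competing "intuitions," and the punishment structure are assumptions chosen only to echo the commission/omission example; the update rule is standard discrete replicator dynamics, in which relatively successful strategies grow.

```python
# Minimal replicator-dynamics sketch (illustrative only; payoffs are assumed,
# not taken from Hoffman, Yoeli, and Navarrete's chapter).
import numpy as np

# Payoff matrix for a symmetric 2x2 game (rows = focal strategy, cols = opponent).
# Strategy 0: transgress by commission (assumed to be punished, so the base payoff
# of 2 is reduced by a punishment cost of 1). Strategy 1: transgress by omission
# (assumed unpunished, keeping the full base payoff of 2).
PAYOFFS = np.array([
    [1.0, 1.0],
    [2.0, 2.0],
])

def replicator_step(freqs, payoffs):
    """One discrete replicator update: strategies with above-average payoff grow."""
    fitness = payoffs @ freqs              # expected payoff of each strategy
    avg_fitness = freqs @ fitness          # population-average payoff
    return freqs * fitness / avg_fitness   # relative success drives frequency change

freqs = np.array([0.5, 0.5])               # start with both intuitions equally common
for generation in range(50):
    freqs = replicator_step(freqs, PAYOFFS)

print(f"frequency of the 'commission is acceptable' intuition: {freqs[0]:.4f}")
print(f"frequency of the 'omission is acceptable' intuition:   {freqs[1]:.4f}")
# The punished intuition is driven out, mirroring the excerpt's point that
# intuitions tracking higher payoffs come to dominate.
```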

The book chapter is here.

Monday, May 29, 2017

Moral Hindsight

Nadine Fleischhut, Björn Meder, & Gerd Gigerenzer
Experimental Psychology (2017), 64, pp. 110-123.

Abstract.

How are judgments in moral dilemmas affected by uncertainty, as opposed to certainty? We tested the predictions of a consequentialist and deontological account using a hindsight paradigm. The key result is a hindsight effect in moral judgment. Participants in foresight, for whom the occurrence of negative side effects was uncertain, judged actions to be morally more permissible than participants in hindsight, who knew that negative side effects occurred. Conversely, when hindsight participants knew that no negative side effects occurred, they judged actions to be more permissible than participants in foresight. The second finding was a classical hindsight effect in probability estimates and a systematic relation between moral judgments and probability estimates. Importantly, while the hindsight effect in probability estimates was always present, a corresponding hindsight effect in moral judgments was only observed among “consequentialist” participants who indicated a cost-benefit trade-off as most important for their moral evaluation.

The article is here.

Sunday, May 28, 2017

CRISPR Makes it Clear: US Needs a Biology Strategy, FAST

Amy Webb
Wired
Originally published

Here is an excerpt:

Crispr can be used to engineer agricultural products like wheat, rice, and animals to withstand the effects of climate change. Seeds can be engineered to produce far greater yields in tiny spaces, while animals can be edited to create triple their usual muscle mass. This could dramatically change global agricultural trade and cause widespread geopolitical destabilization. Or, with advance planning, this technology could help the US forge new alliances.

How comfortable do you feel knowing that there is no group coordinating a national biology strategy in the US, and that a single for-profit company holds a critical mass of intellectual property rights to the future of genomic editing?

While I admire Zheng’s undeniable smarts and creativity, for-profit companies don’t have a mandate to balance the tension between commercial interests and what’s good for humanity; there is no mechanism to ensure that they’ll put our longer-term best interests first.

The article is here.

Saturday, May 27, 2017

Why Do So Many Incompetent Men Become Leaders?

Tomas Chamorro-Premuzic
Harvard Business Review
Originally published August 22, 2013

There are three popular explanations for the clear under-representation of women in management, namely: (1) they are not capable; (2) they are not interested; (3) they are both interested and capable but unable to break the glass-ceiling: an invisible career barrier, based on prejudiced stereotypes, that prevents women from accessing the ranks of power. Conservatives and chauvinists tend to endorse the first; liberals and feminists prefer the third; and those somewhere in the middle are usually drawn to the second. But what if they all missed the big picture?

In my view, the main reason for the uneven management sex ratio is our inability to discern between confidence and competence. That is, because we (people in general) commonly misinterpret displays of confidence as a sign of competence, we are fooled into believing that men are better leaders than women. In other words, when it comes to leadership, the only advantage that men have over women (e.g., from Argentina to Norway and the USA to Japan) is the fact that manifestations of hubris — often masked as charisma or charm — are commonly mistaken for leadership potential, and that these occur much more frequently in men than in women.

The article is here.

Friday, May 26, 2017

What is moral injury in veterans?

Holly Arrow and William Schumacher
The Conversation
Originally posted May 21, 2017

Here is an excerpt:

The moral conflict created by the violations of “what’s right” generates moral injury when the inability to reconcile wartime actions with a personal moral code creates lasting psychological consequences.

Psychiatrist Jonathan Shay, in his work with Vietnam veterans, defined moral injury as the psychological, social and physiological results of a betrayal of “what’s right” by an authority in a high-stakes situation. In “Achilles In Vietnam,” a book that examines the psychological devastation of war, a Vietnam veteran described a situation in which his commanding officers used tear gas on a village after the veteran and his unit had their gas masks rendered ineffective due to water damage. The veteran stated, “They gassed us almost to death.” This type of “friendly fire” incident is morally wounding in a way that attacks by an enemy are not.

Psychologist Brett Litz and his colleagues expanded this to include self-betrayal and identified “perpetrating, failing to prevent, bearing witness to, or learning about acts that transgress deeply held moral beliefs and expectations” as the cause of moral injury.

Guilt and moral injury

A research study published in 1991 identified combat-related guilt as the best predictor of suicide attempts among a sample of Vietnam veterans with PTSD. Details of the veterans’ experiences connected that guilt to morally injurious events.

The article is here.

Do the Right Thing: Preferences for Moral Behavior, Rather than Equity or Efficiency Per Se, Drive Human Prosociality

Capraro, Valerio and Rand, David G.
(May 8, 2017).

Abstract

Decades of experimental research have shown that some people forgo personal gains to benefit others in unilateral one-shot anonymous interactions. To explain these results, behavioral economists typically assume that people have social preferences for minimizing inequality and/or maximizing efficiency (social welfare). Here we present data that are fundamentally incompatible with these standard social preference models. We introduce the “Trade-Off Game” (TOG), where players unilaterally choose between an equitable option and an efficient option. We show that simply changing the labeling of the options to describe the equitable versus efficient option as morally right completely reverses people’s behavior in the TOG. Moreover, people who take the positively framed action, be it equitable or efficient, are more prosocial in a separate Dictator Game (DG) and Prisoner’s Dilemma (PD). Rather than preferences for equity and/or efficiency per se, we propose a generalized morality preference that motivates people to do what they think is morally right. When one option is clearly selfish and the other pro-social (e.g. equitable and/or efficient), as in the DG and PD, the economic outcomes are enough to determine what is morally right. When one option is not clearly more prosocial than the other, as in the TOG, framing resolves the ambiguity about which choice is moral. In addition to explaining our data, this account organizes prior findings that framing impacts cooperation in the standard simultaneous PD, but not in the asynchronous PD or the DG. Thus we present a new framework for understanding the basis of human prosociality.

The paper is here.

Thursday, May 25, 2017

In a moral dilemma, choose the one you love: Impartial actors are seen as less moral than partial ones

Jamie S. Hughes
British Journal of Social Psychology

Abstract

Although impartiality and concern for the greater good are lauded by utilitarian philosophies, it was predicted that when values conflict, those who acted impartially rather than partially would be viewed as less moral. Across four studies, using life-or-death scenarios and more mundane ones, support for the idea that relationship obligations are important in moral attribution was found. In Studies 1–3, participants rated an impartial actor as less morally good and his or her action as less moral compared to a partial actor. Experimental and correlational evidence showed the effect was driven by inferences about an actor's capacity for empathy and compassion. In Study 4, the relationship obligation hypothesis was refined. The data suggested that violations of relationship obligations are perceived as moral as long as strong alternative justifications sanction them. Discussion centres on the importance of relationships in understanding moral attributions.

The article is here.

Emerging technologies: Ethics and morality

Elfren Cruz
The Philippine Star Global
Originally published May 7, 2017

Here is an excerpt:

These emerging technologies will decide the future of humanity because they can be used by the elite class or populists for good or evil. There is no doubt that there will be immense benefits from these new forms of technology. The main issue has been termed as “distributive justice” by some thinkers. This refers to the determination of access to the benefits of technological change.

There are those who believe that the benefits of emerging technologies will worsen the plight of the poor. The World Bank and the International Labor Organization have already warned that millions of jobs will be wiped out by new technologies. As new labor-saving devices are invented, the power of capitalists will grow and the power of labor will diminish. The number of billionaires will increase while the gap between the rich and the poor will continue to widen. Stephen Hawking, the world’s most famous scientist, has even said that artificial intelligence could lead to the extinction of humanity.

By contrast, the optimists believe that emerging technologies, if properly used, could eliminate poverty and abolish suffering. Stuart Russell of UC Berkeley said: “Everything we have of value as human beings, as civilization is the result of intelligence and what artificial intelligence (AI) could do is essentially be a power tool that magnifies human intelligence and gives us the ability to move our civilization forward in all kinds of ways. It might be curing disease, it might be eliminating poverty. I think it certainly should be preventing environmental catastrophe. AI could be instrumental to all those things.”

The article is here.

Wednesday, May 24, 2017

Roger Penrose On Why Consciousness Does Not Compute

Steve Paulson
Nautilus
Originally posted May 4, 2017

Here is an excerpt:

As we probed the deeper implications of Penrose’s theory about consciousness, it wasn’t always clear where to draw the line between the scientific and philosophical dimensions of his thinking. Consider, for example, superposition in quantum theory. How could Schrödinger’s cat be both dead and alive before we open the box? “An element of proto-consciousness takes place whenever a decision is made in the universe,” he said. “I’m not talking about the brain. I’m talking about an object which is put into a superposition of two places. Say it’s a speck of dust that you put into two locations at once. Now, in a small fraction of a second, it will become one or the other. Which does it become? Well, that’s a choice. Is it a choice made by the universe? Does the speck of dust make this choice? Maybe it’s a free choice. I have no idea.”

I wondered if Penrose’s theory has any bearing on the long-running philosophical argument between free will and determinism. Many neuroscientists believe decisions are caused by neural processes that aren’t ruled by conscious thought, rendering the whole idea of free will obsolete. But the indeterminacy that’s intrinsic to quantum theory would suggest that causal connections break down in the conscious brain. Is Penrose making the case for free will?

“Not quite, though at this stage, it looks like it,” he said. “It does look like these choices would be random. But free will, is that random?” Like much of his thinking, there’s a “yes, but” here. His claims are provocative, but they’re often provisional. And so it is with his ideas about free will. “I’ve certainly grown up thinking the universe is deterministic. Then I evolved into saying, ‘Well, maybe it’s deterministic but it’s not computable.’ But is it something more subtle than that? Is it several layers deeper? If it’s something we use for our conscious understanding, it’s going to be a lot deeper than even straightforward, non-computable deterministic physics. It’s a kind of delicate borderline between completely deterministic behavior and something which is completely free.”

Ethics office rejects White House attempt to halt inquiry into lobbyists

Associated Press
Originally posted May 23, 2017

Donald Trump’s administration says the government ethics office lacks the authority to force the president to reveal how many waivers he’s granted to ex-lobbyists in his new administration.

Trump’s budget director, Mick Mulvaney, is asking that the office of government ethics (OGE) director, Walter Shaub, halt his inquiry into lobbyists-turned-Trump administration employees. Mulvaney wrote in a letter last week to Shaub: “This data call appears to raise legal questions regarding the scope of OGE’s authorities.”

Shaub fired back Monday that OGE’s request was well within bounds. The ethics director says he expects to see the waiver information within 10 days.

The article is here.

Tuesday, May 23, 2017

Trump moves to block ethics inquiry centered on ex-lobbyists

Brandon Carter
The Hill
Originally published May 22, 2017

The White House is looking to block an effort from the government’s top ethics office to disclose the names of former lobbyists who have been granted waivers to work in the federal government, according to a new report.

The New York Times reports that the White House sent a letter to the head of the Office of Government Ethics (OGE) challenging its legal authority to request that information.

“It is an extraordinary thing,” Walter Shaub Jr., the director of the ethics office, told the Times. “I have never seen anything like it.”

The letter sent by Mick Mulvaney, the head of the Office of Management and Budget, questions whether the ethics office has the authority to demand information regarding ex-lobbyists who are currently working in the federal government.

The article is here.

Psychologist contractors say they were following agency orders

Pamela MacLean
Bloomberg News
Originally posted May 5, 2017

A pair of U.S. psychologists accused of overseeing the torture of terrorism detainees more than a decade ago face reluctance from a federal judge to let them question the CIA’s deputy director to show they were only following orders.

The judge indicated at a hearing Friday that the psychologists should be able to defend themselves in the 2015 lawsuit without compromising government secrecy around the exact role Gina Haspel played in the agency’s overseas interrogation program years before she was tapped to be second in command by the Trump administration.

The American Civil Liberties Union, which filed the case on behalf of three ex-prisoners, one of whom died in custody, is urging the judge not to let the psychologists’ lawyers question Haspel and a retired Central Intelligence Agency official. While the defendants want to demonstrate their actions were approved by the agency, the ACLU says that won’t shield them from liability.

The article is here.

Monday, May 22, 2017

Half of US physicians receive industry payments

Michael McCarthy
BMJ 2017; 357

Nearly half of US physicians receive payments from the drug, medical device, and related medical industries, and surgeons and male physicians are more likely to do so, a US study has found.

The study leader, Jona A Hattangadi-Gluth, of the University of California, San Diego, based in La Jolla, said that most payments were relatively small but that many specialists receive more than $10 000 (£7750; €9160) a year from industry, including 11% of orthopedic surgeons, 12% of neurologists, and 13% of neurosurgeons.

She said, “The data suggest that these payments are much more pervasive than we thought and [that] there is much more money going directly to physicians than maybe people recognized.”

The researchers analyzed data from 2015 collected from Open Payments, a program created by the 2010 Affordable Care Act that requires biomedical manufacturers and group purchasing organizations to report all general payments, ownership interests, and research payments paid to allopathic and osteopathic physicians in the US.

The article is here.

The morality of technology

Rahul Matthan
Live Mint
Originally published May 3, 2017

Here is an excerpt:

Another example of the two sides of technology is drones—a modern technology that is already being deployed widely—from the delivery of groceries to ensuring that life saving equipment reaches first responders in high density urban areas. But for every beneficent use of drone tech, there are an equal number of dubious uses that challenge our ethical boundaries. Foremost among these is development of AI-powered killer drones—autonomous flying weapons intelligent enough to accurately distinguish between friend and foe and then, autonomously, take the decision to execute a kill.

This duality is inherent in all of tech. But just because technology can be used for evil, that should not, of itself, be a reason not to use it. We need new technology to better ourselves and the world we live in—and we need to be wise about how we apply it so that our use remains consistent with the basic morality inherent in modern society. This implies that each time we make a technological breakthrough we must assess afresh the contexts within which they could present themselves and the uses to which they should (and should not) be put. If required, we must take the trouble to re-draw our moral boundaries, establishing the limits within which they must be constrained.

The article is here.

Sunday, May 21, 2017

What do we evaluate when we evaluate moral character?

Erik G. Helzer & Clayton R. Critcher

Abstract:

Despite growing interest in the topic of moral character, there is very little precision and a lack of agreement among researchers as to what is evaluated when people evaluate character. In this chapter we define moral character in novel social cognitive terms and offer empirical support for the idea that the central qualities of moral character are those deemed essential for social relationships.

Here is an excerpt:

We approach this chapter from the theoretical standpoint that the centrality of character evaluation is due to its function in social life. Evaluation of character is, we think, inherently a judgment about a person’s qualifications for being a solid long-term social investment. That is, people attempt to suss out moral character because they want to know whether a particular agent is the type of person who likely possesses the necessary (even if not sufficient) qualities they expect in a social relationship. In developing these ideas theoretically and empirically, we consider what form moral character takes, discuss what this proposal suggests about how people may and do assess others’ moral character, and identify an assortment of qualities that our perspective predicts will be central to moral character.

The book chapter is here.

Saturday, May 20, 2017

Conflict of Interest and the Integrity of the Medical Profession

Allen S. Lichter
JAMA. 2017;317(17):1725-1726.

Physicians have a moral responsibility to patients; they are trusted to place the needs and interests of patients ahead of their own, free of unwarranted outside influences on their decisions. Those who have relationships that might be seen to influence their decisions and behaviors that may affect fulfilling their responsibilities to patients must be fully transparent about them. Two types of interactions and activities involving physicians are most relevant: (1) commercial or research relationships between a physician expert and a health care company designed to advance an idea or promote a product, and (2) various gifts, sponsored meals, and educational offerings that come directly or indirectly to physicians from these companies.

Whether these and other ties to industry are important is not a new issue for medicine. Considerations regarding the potential influence of commercial ties date back at least to the 1950s and 1960s. In 1991, Relman reminded physicians that they have “a unique opportunity to assume personal responsibility for important decisions that are not influenced by or subordinated to the purposes of third parties.” However, examples of potential subordination are easily found. There are reports of physicians who are paid handsomely to promote a drug or device, essentially serving as a company spokesperson; of investigators who have ownership in the company that stands to gain if the clinical trial is successful; and of clinical guideline panels that are dominated by experts with financial ties to companies whose products are relevant to the disease or condition at hand.

The article is here.

Friday, May 19, 2017

Conflict of Interest: Why Does It Matter?

Harvey V. Fineberg
JAMA. 2017;317(17):1717-1718.

Preservation of trust is the essential purpose of policies about conflict of interest. Physicians have many important roles including caring for individual patients, protecting the public’s health, engaging in research, reporting scientific and clinical discoveries, crafting professional guidelines, and advising policy makers and regulatory bodies. Success in all these functions depends on others—laypersons, professional peers, and policy leaders—believing and acting on the word of physicians. Therefore, the confidence of others in physician judgment is of paramount importance. When trust in physician judgment is impaired, the role of physicians is diminished.

Physicians should make informed, disinterested judgments. To be disinterested means being free of personal advantage. The type of advantage that is typically of concern in most situations involving physicians is financial. When referring to conflict of interest, the term generally means a financial interest that relates to the issue at hand. More specifically, a conflict of interest can be discerned by using a reasonable person standard; ie, a conflict of interest exists when a reasonable person would interpret the financial circumstances pertaining to a situation as potentially sufficient to influence the judgment of the physician in question.

The article is here.

Moral transgressions corrupt neural representations of value

Molly J Crockett, J. Siegel, Z. Kurth-Nelson, P. Dayan & R. Dolan
Nature Neuroscience

Abstract

Moral systems universally prohibit harming others for personal gain. However, we know little about how such principles guide moral behavior. Using a task that assesses the financial cost participants ascribe to harming others versus themselves, we probed the relationship between moral behavior and neural representations of profit and pain. Most participants displayed moral preferences, placing a higher cost on harming others than themselves. Moral preferences correlated with neural responses to profit, where participants with stronger moral preferences had lower dorsal striatal responses to profit gained from harming others. Lateral prefrontal cortex encoded profit gained from harming others, but not self, and tracked the blameworthiness of harmful choices. Moral decisions also modulated functional connectivity between lateral prefrontal cortex and the profit-sensitive region of dorsal striatum. The findings suggest moral behavior in our task is linked to a neural devaluation of reward realized by a prefrontal modulation of striatal value representations.

The article is here.

Thursday, May 18, 2017

The secret to honesty revealed: it feels better

Henry Bodkin
The Telegraph
Originally published May 1, 2017

It is a mystery that has perplexed psychologists and philosophers since the dawn of humanity: why are most people honest?

Now, using a complex array of MRI machines and electrocution devices, scientists claim to have found the answer.

(cut)

“Our findings suggest the brain internalizes the moral judgments of others, simulating how much others might blame us for potential wrongdoing, even when we know our actions are anonymous,” said Dr Crockett.

The scans also revealed that an area of the brain involved in making moral judgments, the lateral prefrontal cortex, was most active in trials where inflicting pain yielded minimal profit.

The article is here.

Morality constrains the default representation of what is possible

Phillips J; Cushman F
Proc Natl Acad Sci U S A. 2017; (ISSN: 1091-6490)

The capacity for representing and reasoning over sets of possibilities, or modal cognition, supports diverse kinds of high-level judgments: causal reasoning, moral judgment, language comprehension, and more. Prior research on modal cognition asks how humans explicitly and deliberatively reason about what is possible but has not investigated whether or how people have a default, implicit representation of which events are possible. We present three studies that characterize the role of implicit representations of possibility in cognition. Collectively, these studies differentiate explicit reasoning about possibilities from default implicit representations, demonstrate that human adults often default to treating immoral and irrational events as impossible, and provide a case study of high-level cognitive judgments relying on default implicit representations of possibility rather than explicit deliberation.

The paper is here.

Wednesday, May 17, 2017

Moral conformity in online interactions

Meagan Kelly, Lawrence Ngo, Vladimir Chituc, Scott Huettel, and Walter Sinnott-Armstrong
Social Influence 

Abstract

Over the last decade, social media has increasingly been used as a platform for political and moral discourse. We investigate whether conformity, specifically concerning moral attitudes, occurs in these virtual environments apart from face-to-face interactions. Participants took an online survey and saw either statistical information about the frequency of certain responses, as one might see on social media (Study 1), or arguments that defend the responses in either a rational or emotional way (Study 2). Our results show that social information shaped moral judgments, even in an impersonal digital setting. Furthermore, rational arguments were more effective at eliciting conformity than emotional arguments. We discuss the implications of these results for theories of moral judgment that prioritize emotional responses.

The article is here.

Where did Nazi doctors learn their ethics? From a textbook

Michael Cook
BioEdge.org
Originally posted April 29, 2017

German medicine under Hitler resulted in so many horrors – eugenics, human experimentation, forced sterilization, involuntary euthanasia, mass murder – that there is a temptation to say that “Nazi doctors had no ethics”.

However, according to an article in the Annals of Internal Medicine by Florian Bruns and Tessa Chelouche (from Germany and Israel respectively), this was not the case at all. In fact, medical ethics was an important part of the medical curriculum between 1939 and 1945. Nazi officials established lectureships in every medical school in Germany for a subject called “Medical Law and Professional Studies” (MLPS).

There was no lack of ethics. It was just the wrong kind of ethics.

(cut)

It is important to realize that ethical reasoning can be corrupted and that teaching ethics is, in itself, no guarantee of the moral integrity of physicians.

The article is here.

Tuesday, May 16, 2017

Talking in Euphemisms Can Chip Away at Your Sense of Morality

Laura Niemi, Alek Chakroff, and Liane Young
The Science of Us
Originally published April 7, 2017

Here is an excerpt:

Taken together, the results suggest that unethical behavior becomes easier when we perceive our own actions in indirect terms, which makes things that we would otherwise balk at seem a bit more palatable. In other words, deploying indirect speech doesn’t just help us evade blame from others — it also helps us to convince ourselves that unethical acts aren’t so bad after all.

That’s not to say that this is a conscious process. A speaker who shrouds his harmful intentions in indirect speech may understand that this will help him hold on to his standing in the public eye, or maintain his reputation among those closest to him — a useful tactic when those intentions are likely to be condemned or fall outside the bounds of socially acceptable behavior. But that same speaker may be unaware of just how much their indirect speech is easing their own psyche, too.

The article is here.

Why are we reluctant to trust robots?

Jim Everett, David Pizarro and Molly Crockett
The Guardian
Originally posted April 27, 2017

Technologies built on artificial intelligence are revolutionising human life. As these machines become increasingly integrated in our daily lives, the decisions they face will go beyond the merely pragmatic, and extend into the ethical. When faced with an unavoidable accident, should a self-driving car protect its passengers or seek to minimise overall lives lost? Should a drone strike a group of terrorists planning an attack, even if civilian casualties will occur? As artificially intelligent machines become more autonomous, these questions are impossible to ignore.

There are good arguments for why some ethical decisions ought to be left to computers—unlike human beings, machines are not led astray by cognitive biases, do not experience fatigue, and do not feel hatred toward an enemy. An ethical AI could, in principle, be programmed to reflect the values and rules of an ideal moral agent. And free from human limitations, such machines could even be said to make better moral decisions than us. Yet the notion that a machine might be given free rein over moral decision-making seems distressing to many—so much so that, for some, their use poses a fundamental threat to human dignity. Why are we so reluctant to trust machines when it comes to making moral decisions? Psychology research provides a clue: we seem to have a fundamental mistrust of individuals who make moral decisions by calculating costs and benefits – like computers do.

The article is here.

Monday, May 15, 2017

Overcoming patient reluctance to be involved in medical decision making

J.S. Blumenthal-Barby
Patient Education and Counseling
January 2017, Volume 100, Issue 1, Pages 14–17

Abstract

Objective

To review the barriers to patient engagement and techniques to increase patients’ engagement in their medical decision-making and care.

Discussion

Barriers exist to patient involvement in their decision-making and care. Individual barriers include education, language, and culture/attitudes (e.g., deference to physicians). Contextual barriers include time (lack of) and timing (e.g., lag between test results being available and patient encounter). Clinicians should gauge patients’ interest in being involved and their level of current knowledge about their condition and options. Framing information in multiple ways and modalities can enhance understanding, which can empower patients to become more engaged. Tools such as decision aids or audio recording of conversations can help patients remember important information, a requirement for meaningful engagement. Clinicians and researchers should work to create social norms and prompts around patients asking questions and expressing their values. Telehealth and electronic platforms are promising modalities for allowing patients to ask questions in a non-intimidating atmosphere.

Conclusion

Researchers and clinicians should be motivated to find ways to engage patients on the ethical imperative that many patients prefer to be more engaged in some way, shape, or form; patients have better experiences when they are engaged, and engagement improves health outcomes.

The article is here.

Cassandra’s Regret: The Psychology of Not Wanting to Know

Gigerenzer, Gerd; Garcia-Retamero, Rocio
Psychological Review, Vol 124(2), Mar 2017, 179-196.

Abstract

Ignorance is generally pictured as an unwanted state of mind, and the act of willful ignorance may raise eyebrows. Yet people do not always want to know, demonstrating a lack of curiosity at odds with theories postulating a general need for certainty, ambiguity aversion, or the Bayesian principle of total evidence. We propose a regret theory of deliberate ignorance that covers both negative feelings that may arise from foreknowledge of negative events, such as death and divorce, and positive feelings of surprise and suspense that may arise from foreknowledge of positive events, such as knowing the sex of an unborn child. We conduct the first representative nationwide studies to estimate the prevalence and predictability of deliberate ignorance for a sample of 10 events. Its prevalence is high: Between 85% and 90% of people would not want to know about upcoming negative events, and 40% to 70% prefer to remain ignorant of positive events. Only 1% of participants consistently wanted to know. We also deduce and test several predictions from the regret theory: Individuals who prefer to remain ignorant are more risk averse and more frequently buy life and legal insurance. The theory also implies the time-to-event hypothesis, which states that for the regret-prone, deliberate ignorance is more likely the nearer the event approaches. We cross-validate these findings using 2 representative national quota samples in 2 European countries. In sum, we show that deliberate ignorance exists, is related to risk aversion, and can be explained as avoiding anticipatory regret.

The article is here.

Sunday, May 14, 2017

The power thinker

Colin Koopman
Originally posted March 15, 2017

Here is an excerpt:

Foucault’s work shows that disciplinary power was just one of many forms that power has come to take over the past few hundred years. Disciplinary anatomo-politics persists alongside sovereign power as well as the power of bio-politics. In his next book, The History of Sexuality, Foucault argued that bio-politics helps us to understand how garish sexual exuberance persists in a culture that regularly tells itself that its true sexuality is being repressed. Bio-power does not forbid sexuality, but rather regulates it in the maximal interests of very particular conceptions of reproduction, family and health. It was a bio-power wielded by psychiatrists and doctors that, in the 19th century, turned homosexuality into a ‘perversion’ because of its failure to focus sexual activity around the healthy reproductive family. It would have been unlikely, if not impossible, to achieve this by sovereign acts of direct physical coercion. Much more effective were the armies of medical men who helped to straighten out their patients for their own supposed self-interest.

Other forms of power also persist in our midst. Some regard the power of data – that is the info-power of social media, data analytics and ceaseless algorithmic assessment – as the most significant kind of power that has emerged since Foucault’s death in 1984.

The article is here.

Saturday, May 13, 2017

Justices Blast One-Stop-Shop Experts in Alabama

Tim Ryan
Courthouse News
Originally posted April 24, 2017

The Supreme Court’s liberal justices shredded an argument by Alabama’s solicitor general Monday that criminal defendants are not entitled to a mental health expert separate from the ones tapped by prosecutors.

McWilliams v. Dunn, the case the Supreme Court heard this morning, is nested inside the court’s 1985 decision in Ake v. Oklahoma, which held that poor criminal defendants using a defense of insanity are entitled to an expert to help support their claim.

A split has emerged in the 30 years since the decision, with some states deciding one expert helping both the prosecution and defense satisfies the requirement, and others choosing to assign an expert for the defendant to use exclusively.

The article is here.

Friday, May 12, 2017

US Suicide Rates Display Growing Geographic Disparity.

JAMA.
2017;317(16):1616. doi:10.1001/jama.2017.4076

As the overall US suicide rate increases, a CDC study showed that the trend toward higher rates in less populated parts of the country and lower rates in large urban areas has become more pronounced.

Using data from the National Vital Statistics System and the US Census Bureau, the researchers reported that from 1999 to 2015, the annual suicide rate increased by 14%, from 12.6 to 14.4 per 100,000 US residents aged 10 years or older.

(cut)

Higher suicide rates in less urban areas could be linked with limited access to mental health care, the opioid overdose epidemic, and social isolation, the investigators suggested. The 2007-2009 economic recession may have caused the sharp upswing, they added, because rural areas and small towns were hardest hit.

The article is here.

Physicians, Not Conscripts — Conscientious Objection in Health Care

Ronit Y. Stahl and Ezekiel J. Emanuel
N Engl J Med 2017; 376:1380-1385

“Conscience clause” legislation has proliferated in recent years, extending the legal rights of health care professionals to cite their personal religious or moral beliefs as a reason to opt out of performing specific procedures or caring for particular patients. Physicians can refuse to perform abortions or in vitro fertilization. Nurses can refuse to aid in end-of-life care. Pharmacists can refuse to fill prescriptions for contraception. More recently, state legislation has enabled counselors and therapists to refuse to treat lesbian, gay, bisexual, and transgender (LGBT) patients, and in December, a federal judge issued a nationwide injunction against Section 1557 of the Affordable Care Act, which forbids discrimination on the basis of gender identity or termination of a pregnancy.

The article is here, and you need a subscription.

Here is an excerpt:

Objection to providing patients interventions that are at the core of medical practice – interventions that the profession deems to be effective, ethical, and standard treatments – is unjustifiable (AMA Code of Medical Ethics [Opinion 11.2.2]).

Making the patient paramount means offering and providing accepted medical interventions in accordance with patients’ reasoned decisions. Thus, a health care professional cannot deny patients access to medications for mental health conditions, sexual dysfunction, or contraception on the basis of their conscience, since these drugs are professionally accepted as appropriate medical interventions.

Thursday, May 11, 2017

The Implications of Libertarianism for Compulsory Vaccination

Justin Bernstein
BMJ Blogs
Originally posted April 24, 2017

Here is an excerpt:

Some libertarians, however, attempt to avoid the controversial conclusion that libertarianism is incompatible with compulsory vaccination. In my recent paper, “The Case Against Libertarian Arguments for Compulsory Vaccination,” I argue that such attempts are unsuccessful, and so libertarians must either develop new arguments, or join Senator Paul in opposing compulsory vaccination.

How might a libertarian try to defend compulsory vaccination? One argument is that going unvaccinated exposes others to risk, and this violates their rights. Since the state is permitted to use coercive measures to protect rights, the state may require parents to vaccinate their children. But for libertarians, this argument has two shortcomings. First, there are other, far riskier activities that the libertarian prohibits the government from regulating. For instance, owning and using automobiles or firearms imposes far more significant risk than going unvaccinated, but libertarians defend our rights to own and use automobiles and firearms. Second, one individual going unvaccinated poses very little risk; the risk eventuates only if many collectively go unvaccinated, thereby endangering herd immunity. Imposing such an independently small risk hardly seems to be a rights violation.

The entire blog post is here.

Is There a Duty to Use Moral Neurointerventions?

Michelle Ciurria
Topoi (2017).
doi:10.1007/s11245-017-9486-4

Abstract

Do we have a duty to use moral neurointerventions to correct deficits in our moral psychology? On their surface, these technologies appear to pose worrisome risks to valuable dimensions of the self, and these risks could conceivably weigh against any prima facie moral duty we have to use these technologies. Focquaert and Schermer (Neuroethics 8(2):139–151, 2015) argue that neurointerventions pose special risks to the self because they operate passively on the subject’s brain, without her active participation, unlike ‘active’ interventions. Some neurointerventions, however, appear to be relatively unproblematic, and some appear to preserve the agent’s sense of self precisely because they operate passively. In this paper, I propose three conditions that need to be met for a medical intervention to be considered low-risk, and I say that these conditions cut across the active/passive divide. A low-risk intervention must: (i) pass pre-clinical and clinical trials, (ii) fare well in post-clinical studies, and (iii) be subject to regulations protecting informed consent. If an intervention passes these tests, its risks do not provide strong countervailing reasons against our prima facie duty to undergo the intervention.

The article is here.

Wednesday, May 10, 2017

How do you punish a criminal robot?

Christopher Markou
The Independent
Originally posted on April 20, 2017

Here is an excerpt:

Among the many things that must now be considered is what role and function the law will play. Expert opinions differ wildly on the likelihood and imminence of a future where sufficiently advanced robots walk among us, but we must confront the fact that autonomous technology with the capacity to cause harm is already around. Whether it’s a military drone with a full payload, a law enforcement robot exploding to kill a dangerous suspect or something altogether more innocent that causes harm through accident, error, oversight, or good ol’ fashioned stupidity.

There’s a cynical saying in law that “where there’s blame, there’s a claim”. But who do we blame when a robot does wrong? This proposition can easily be dismissed as something too abstract to worry about. But let’s not forget that a robot was arrested (and released without charge) for buying drugs; and Tesla Motors was absolved of responsibility by the American National Highway Traffic Safety Administration when a driver was killed in a crash while his Tesla was in autopilot mode.

While problems like this are certainly peculiar, history has a lot to teach us. For instance, little thought was given to who owned the sky before the Wright Brothers took the Kitty Hawk for a joyride. Time and time again, the law is presented with these novel challenges. And despite initial overreaction, it got there in the end. Simply put: law evolves.

The article is here.

Who Decides When a Patient Can’t? Statutes on Alternate Decision Makers

Erin S. DeMartino and others
The New England Journal of Medicine
DOI: 10.1056/NEJMms1611497

Many patients cannot make their own medical decisions, having lost what is called decisional capacity. The estimated prevalence of decisional incapacity approaches 40% among adult medical inpatients and residential hospice patients and exceeds 90% among adults in some intensive care units. Patients who lack capacity may guide decisions regarding their own care through an advance directive, a legal document that records treatment preferences or designates a durable power of attorney for health care, or both. Unfortunately, the rate of completion of advance directives in the general U.S. population hovers around 20 to 29%, creating uncertainty about who will fill the alternate decision-maker role for many patients.

There is broad ethical consensus that other persons may make life-and-death decisions on behalf of patients who lack decisional capacity. Over the past few decades, many states have enacted legislation designed to delineate decision-making authority for patients who lack advance directives. Yet the 50 U.S. states and the District of Columbia vary in their procedures for appointing and challenging default surrogates, the attributes they require of such persons, their priority ranking of possible decision makers, and dispute resolution. These differences have important implications for clinicians, patients, and public health.

The article is here.

Tuesday, May 9, 2017

Ethics experts question Kushner relatives pushing White House connections in China

Allan Smith
Business Insider
Originally published May 8, 2017

Ethics experts criticized White House senior adviser Jared Kushner's relatives for using White House connections to enhance a presentation to Chinese investors last weekend.

Members of Kushner's family gave multiple presentations in China detailing an opportunity to "invest $500,000 and immigrate to the United States" through a controversial visa program and promoting ties to Kushner and President Donald Trump, according to media reports.

Richard Painter, who was President George W. Bush's top ethics lawyer from 2005 to 2007 and is now a professor at the University of Minnesota, told Business Insider the presentation was "obviously completely inappropriate."

He added that the Kushner family "ought to be disqualified" from the EB-5 visa program they were promoting. The visa is awarded to foreign investors who invest at least $500,000 in US projects that create at least 10 full-time jobs.

The article is here.

Inside Libratus, the Poker AI That Out-Bluffed the Best Humans

Cade Metz
Wired Magazine
Originally published February 1, 2017

Here is an excerpt:

Libratus relied on three different systems that worked together, a reminder that modern AI is driven not by one technology but many. Deep neural networks get most of the attention these days, and for good reason: They power everything from image recognition to translation to search at some of the world’s biggest tech companies. But the success of neural nets has also pumped new life into so many other AI techniques that help machines mimic and even surpass human talents.

Libratus, for one, did not use neural networks. Mainly, it relied on a form of AI known as reinforcement learning, a method of extreme trial-and-error. In essence, it played game after game against itself. Google’s DeepMind lab used reinforcement learning in building AlphaGo, the system that cracked the ancient game of Go ten years ahead of schedule, but there’s a key difference between the two systems. AlphaGo learned the game by analyzing 30 million Go moves from human players, before refining its skills by playing against itself. By contrast, Libratus learned from scratch.

Through an algorithm called counterfactual regret minimization, it began by playing at random, and eventually, after several months of training and trillions of hands of poker, it too reached a level where it could not just challenge the best humans but play in ways they couldn’t—playing a much wider range of bets and randomizing these bets, so that rivals have more trouble guessing what cards it holds. “We give the AI a description of the game. We don’t tell it how to play,” says Noam Brown, a CMU grad student who built the system alongside his professor, Tuomas Sandholm. “It develops a strategy completely independently from human play, and it can be very different from the way humans play the game.”
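
The article names counterfactual regret minimization but does not unpack it. The sketch below is emphatically not Libratus's code; it is a minimal, assumed illustration of regret matching, the self-play update at the heart of CFR, applied to rock-paper-scissors purely because that game's mixed Nash equilibrium (one-third each) is easy to verify. Each player accumulates regret for the actions it did not take, plays in proportion to positive regret, and its average strategy drifts toward equilibrium.

```python
# Minimal regret-matching sketch (illustrative; not Libratus). Rock-paper-scissors
# is assumed here only because its equilibrium (1/3, 1/3, 1/3) is easy to check.
import random

ACTIONS = ["rock", "paper", "scissors"]

def payoff(a, b):
    """Payoff to the player choosing a against b: +1 win, 0 tie, -1 loss."""
    if a == b:
        return 0
    wins = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}
    return 1 if (a, b) in wins else -1

class RegretMatcher:
    def __init__(self):
        self.regret_sum = {a: 0.0 for a in ACTIONS}
        self.strategy_sum = {a: 0.0 for a in ACTIONS}

    def strategy(self):
        """Play each action in proportion to its positive accumulated regret."""
        positive = {a: max(r, 0.0) for a, r in self.regret_sum.items()}
        total = sum(positive.values())
        if total <= 0:
            return {a: 1.0 / len(ACTIONS) for a in ACTIONS}  # uniform if no regret yet
        return {a: r / total for a, r in positive.items()}

    def sample(self, strategy):
        return random.choices(ACTIONS, weights=[strategy[a] for a in ACTIONS])[0]

    def update(self, my_action, opp_action, strategy):
        """Accumulate regret: how much better each alternative would have done."""
        actual = payoff(my_action, opp_action)
        for a in ACTIONS:
            self.regret_sum[a] += payoff(a, opp_action) - actual
            self.strategy_sum[a] += strategy[a]

    def average_strategy(self):
        total = sum(self.strategy_sum.values())
        return {a: s / total for a, s in self.strategy_sum.items()}

# Self-play, echoing the article's description of learning by playing itself.
p1, p2 = RegretMatcher(), RegretMatcher()
for _ in range(100_000):
    s1, s2 = p1.strategy(), p2.strategy()
    a1, a2 = p1.sample(s1), p2.sample(s2)
    p1.update(a1, a2, s1)
    p2.update(a2, a1, s2)

print(p1.average_strategy())  # approaches {rock: ~0.33, paper: ~0.33, scissors: ~0.33}
```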

The article is here.

Monday, May 8, 2017

Improving Ethical Culture by Measuring Stakeholder Trust

Phillip Nichols and Patricia Dowden
Compliance and Ethics Blog
Originally posted April 10, 2017

Here is an excerpt:

People who study how individuals behave in organizations find that norms are far more powerful than formal rules, even formal rules that are backed up by legal sanctions. Thus, a norm that guides people to not steal is going to be more effective than a formal rule that prohibits stealing. Therein lies the benefit to a business firm. A strong ethical culture will be far more effective than formal rules (although of course there is still a need for formal rules).

When the “ethical culture” component of a business firm’s overall culture is strong – when norms and other things guide people in that firm to make sound ethical and social decisions – the firm benefits in two ways: it enhances the positive and controls the negative. In terms of enhancing the positive,  a strong ethical culture increases the amount of loyalty and commitment that people associated with a business firm have towards that firm. A strong ethical culture also contributes to higher levels of job satisfaction. People who are loyal and committed to a business firm are more likely to make “sacrifices” for that firm, meaning they are more likely to do things like working late or on weekends in order to get a project done, or help another department when that department needs extra help. People who are loyal and committed to a firm are more likely to defend that firm against accusers, and to stand by the firm in times of crisis. Workers who have high levels of job satisfaction are more likely to stay with a firm, and are more likely to refer customers to that firm and to recruit others to work for that firm.

The blog post is here.

Raising good robots

Regina Rini
aeon.com
Originally published April 18, 2017

Intelligent machines, long promised and never delivered, are finally on the horizon. Sufficiently intelligent robots will be able to operate autonomously from human control. They will be able to make genuine choices. And if a robot can make choices, there is a real question about whether it will make moral choices. But what is moral for a robot? Is this the same as what’s moral for a human?

Philosophers and computer scientists alike tend to focus on the difficulty of implementing subtle human morality in literal-minded machines. But there’s another problem, one that really ought to come first. It’s the question of whether we ought to try to impose our own morality on intelligent machines at all. In fact, I’d argue that doing so is likely to be counterproductive, and even unethical. The real problem of robot morality is not the robots, but us. Can we handle sharing the world with a new type of moral creature?

We like to imagine that artificial intelligence (AI) will be similar to humans, because we are the only advanced intelligence we know. But we are probably wrong. If and when AI appears, it will probably be quite unlike us. It might not reason the way we do, and we could have difficulty understanding its choices.

The article is here.

Sunday, May 7, 2017

Individual Differences in Moral Disgust Do Not Predict Utilitarian Judgments, Sexual and Pathogen Disgust Do

Michael Laakasuo, Jukka Sundvall & Marianna Drosinou
Scientific Reports 7, Article number: 45526 (2017)
doi:10.1038/srep45526

Abstract

The role of emotional disgust and disgust sensitivity in moral judgment and decision-making has been debated intensively for over 20 years. Until very recently, there were two main evolutionary narratives for this rather puzzling association. One model suggests that it developed through some form of group selection mechanism, in which the internal norms of groups acted as pathogen safety mechanisms. Another model suggested that these mechanisms developed through hygiene norms, which were piggybacking on pathogen disgust mechanisms. In this study we present another alternative, namely that this mechanism might have evolved through sexual disgust sensitivity. We note that though the role of disgust in moral judgment has been questioned recently, few studies have taken disgust sensitivity into account. We present data from a large sample (N = 1300) in which we analyzed, using Structural Equation Modeling, the associations between the Three Domain Disgust Scale and the 12 most commonly used moral dilemmas measuring utilitarian/deontological preferences. Our results indicate that of the three domains of disgust, only sexual disgust is associated with more deontological moral preferences. We also found that pathogen disgust was associated with more utilitarian preferences. Implications of the findings are discussed.
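The paper's analysis uses Structural Equation Modeling; as a simplified stand-in, the sketch below (hypothetical variable names, synthetic data generated only to mimic the reported direction of effects) shows how the core question could be framed as a multiple regression of utilitarian preference on the three disgust domains.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1300  # sample size reported in the abstract

# Synthetic stand-in data: three disgust-domain scores (hypothetical column names).
df = pd.DataFrame({
    "sexual_disgust": rng.normal(size=n),
    "pathogen_disgust": rng.normal(size=n),
    "moral_disgust": rng.normal(size=n),
})
# Simulate the direction of effects the abstract reports: sexual disgust pushes
# toward deontological (lower utilitarian) choices, pathogen disgust toward
# utilitarian ones, and moral disgust predicts nothing.
df["utilitarian"] = (-0.3 * df["sexual_disgust"]
                     + 0.2 * df["pathogen_disgust"]
                     + rng.normal(scale=1.0, size=n))

model = smf.ols("utilitarian ~ sexual_disgust + pathogen_disgust + moral_disgust",
                data=df).fit()
print(model.params)   # recovers the simulated coefficients
print(model.pvalues)  # moral_disgust should come out non-significant

A full SEM would additionally model the disgust domains and the dilemma responses as latent variables measured by their scale items, but the regression captures the basic pattern of associations.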

The article is here.

Saturday, May 6, 2017

Investigating Altruism and Selfishness Through the Hypothetical Use of Superpowers

Ahuti Das-Friebel, Nikita Wadhwa, Merin Sanil, Hansika Kapoor, Sharanya V.
Journal of Humanistic Psychology 
First published date: April-13-2017
10.1177/0022167817699049

Abstract

Drawing from literature associating superheroes with altruism, this study examined whether ordinary individuals engaged in altruistic or selfish behavior when they were hypothetically given superpowers. Participants were presented with six superpowers—three positive (healing, invulnerability, and flight) and three negative (fear inducement, psychic persuasion, and poison generation). They indicated how desirable they found each power, what they would use it for (social benefit, personal gain, social harm), and listed examples of such uses. Quantitative analyses (n = 285) revealed that 94% of participants wished to possess a superpower, and the majority indicated they would use the powers to benefit themselves rather than for altruistic purposes. Furthermore, while men wanted positive and negative powers more, women were more likely than men to use such powers for personal and social gain. Qualitative analyses of the uses of the powers (n = 524) resulted in 16 themes of altruistic and selfish behavior. Results were analyzed within Pearce and Amato’s model of helping behavior, which was used to classify altruistic behavior and adapted to classify selfish behavior. In contrast to how superheroes behave, both sets of analyses revealed that participants would hypothetically use superpowers for selfish rather than altruistic purposes. Limitations and suggestions for future research are outlined.

The article is here.

Friday, May 5, 2017

When Therapists Make Mistakes

Keely Kolmes
drkolmes.com
Originally published August 10, 2009

We don’t often talk about therapeutic blunders, although they happen all the time. There are so many ways for therapists to fail clients. The most common is probably a mismatch of styles, or a therapist who is not really helping her client. Then there are those moments when perhaps we fail our clients by not responding in the moment in the way the client might desire. Maybe we sometimes challenge when we should nurture. Or we nurture when we should challenge. Or we may do any number of subtle things, perhaps below the threshold of consciousness, not even fully acknowledged by our clients, but which create distance, disappointment, or detachment. Some examples are stifling yawns, spacing out for a moment, or failing to remember an important name or detail, leaving the client feeling that we are not really fully present or engaged. This lack of connection may trigger feelings of disappointment, loss, or abandonment. For clients with relational traumas, events such as vacations, emergencies, or even adjustments in session times may also cause feelings of loss and abandonment.

Recently, I was having one of those weeks. The details aren’t important, but I’ll acknowledge that I had taken on a few too many things. Top it off with having a few people needing to meet at different times. Add to that one way I manage client confidentiality: putting client names into my hard calendar (which I do not carry about with me) and then transcribing the sessions later to my iPhone calendar simply as “client,” to preserve confidentiality in the event that my phone is lost or stolen.

The result?

The blog post is here.

The Duty to be Morally Enhanced

Persson, I. & Savulescu, J.
Topoi (2017)
doi:10.1007/s11245-017-9475-7

Abstract

We have a duty to try to develop and apply safe and cost-effective means to increase the probability that we shall do what we morally ought to do. It is here argued that this includes biomedical means of moral enhancement, that is, pharmaceutical, neurological or genetic means of strengthening the central moral drives of altruism and a sense of justice. Such a strengthening of moral motivation is likely to be necessary today because common-sense morality, which has its evolutionary origin in small-scale societies with primitive technology, will become much more demanding if it is revised to serve the needs of contemporary globalized societies with an advanced technology capable of affecting conditions of life world-wide for centuries to come.

The article is here.

Thursday, May 4, 2017

Rude Doctors, Rude Nurses, Rude Patients

Perri Klass
The New York Times
Originally published April 10, 2017

Here is an excerpt:

None of that is a surprise, and in fact, there is a good deal of literature to suggest that the medical environment includes all kinds of harshness, and that much of the rudeness you encounter as a doctor or nurse is likely to come from colleagues and co-workers.  An often-cited British study from 2015 called “Sticks and Stones” reported that rude, dismissive and aggressive communication between doctors (inevitably abbreviated, in a medical journal, as RDA communication) affected 31 percent of doctors several times a week or more. The researchers found that rudeness was more common from certain medical specialties: radiology, general surgery, neurosurgery and cardiology. They also established that higher status was somewhat protective; junior doctors and trainees encountered more rudeness.

In the United States, a number of studies have looked at how rudeness affects medical students and medical residents, as part of tracking the different ways in which they are often mistreated.

One article earlier this year in the journal Medical Teacher charted the effect on medical student morale of a variety of experiences, including verbal and nonverbal mistreatment, by everyone from attending physicians to residents to nurses. Mistreatment of medical students, the authors argued, may actually reflect serious problems on the part of their teachers, such as burnout, depression or substance abuse; it’s not enough to classify the “perpetrators” (that is, the rude people) as unprofessional and tell them to stop.

The article is here.

Ethics agency to review waivers for Trump appointees

Bill Allison
The World Daily
Originally published April 29, 2017

The federal ethics agency is reviewing every waiver of conflict-of-interest rules that President Donald Trump’s appointees have received.

A memorandum from the U.S. Office of Government Ethics seeks documentation of waivers granted to appointees ordinarily required to recuse themselves from matters in which they or family members have a financial interest.

Issued by the agency’s director, Walter Shaub, it specifies that all agencies and appointees, “including White House officials,” must comply with the notice, which covers appointees in the administrations of Trump and Barack Obama.

The article is here.

Wednesday, May 3, 2017

Ethics office says it wasn’t consulted about Ivanka Trump job

CNN Wire
Originally published May 2, 2017

The White House brought Ivanka Trump on as an adviser without consulting the Office of Government Ethics, the ethics office says.

The New York Times and Politico reported March 20 that the president’s older daughter was working out of a West Wing office. A White House official told CNN that she would get a security clearance but would not be considered a government employee.

The next day, White House Press Secretary Sean Spicer assured reporters that Ivanka Trump would follow the ethics restrictions that apply to federal employees. He said she was acting “in consultation with the Office of Government Ethics.”

But the ethics office, in a letter made public Monday, said it was not consulted. Director Walter Shaub said he reached out to the White House and to Ivanka Trump’s lawyer on March 24 to tell them that Ivanka Trump should be considered a federal employee, subject to those rules.

Teaching Ethics Should Be a STEM Essential

Ann Jolly
Middle Web
Originally posted October 11, 2015

Here is an excerpt:

Do you have ethics built into your STEM curriculum? What does that look like? For a start I’m envisioning kids in their teams debating solutions to problems, looking at possible consequences of those solutions, and examining the trade-offs they’d have to make.

Some types of real-world problems lend themselves readily to ethical deliberations. Proposed environmental solutions for cleaner air, for example, resulted in push-back from some industries that faced having to invest more money in equipment, and even from some citizens who feared a rise in the price of the products these industries produce. So how do you lead your students through a productive discussion of these issues?

In my search for answers to that question I located a free Ethics Primer from the Northwest Association for Biomedical Research (downloadable as a PDF). This publication strongly recommends that the study of ethics begin through exploring a case study or a scenario.

A STEM lesson provides a perfect kickoff for an ethics discussion, since a scenario generally accompanies the real-world problem kids are trying to solve. From there, ethics principles and practices can be built naturally into the lesson.

The article is here.

Tuesday, May 2, 2017

Ethics office details conflict of interest rules for Ivanka Trump

Olivia Beavers
The Hill
Originally posted May 1, 2017

Here is an excerpt:

Shaub said the ethics rules prevent top White House appointees “from participating personally and substantially in particular matters directly and predictably affecting their financial interests.” They typically do so by recusing themselves from “particular issues that would affect the appointee's personal and imputed financial interests.”

The ethics office plans to review Ivanka Trump’s disclosures after they are filed.

“After the report is revised, OGE seeks information about how the White House is addressing any potential conflicts of interest identified during the review process,” Shaub continued. “OGE then makes a determination regarding apparent compliance with financial disclosure and conflict of interest rules and either certifies or declines to certify the financial disclosure report.”

The article is here.

AI Learning Racism, Sexism and Other Prejudices from Humans

Ian Johnston
The Independent
Originally published April 13, 2017

Artificially intelligent robots and devices are being taught to be racist, sexist and otherwise prejudiced by learning from humans, according to new research.

A massive study of millions of words online looked at how close different terms were to each other in the text – the same way that automatic translators use “machine learning” to establish what language means.

Some of the results were stunning.

(cut)

“We have demonstrated that word embeddings encode not only stereotyped biases but also other knowledge, such as the visceral pleasantness of flowers or the gender distribution of occupations,” the researchers wrote.

The study also implies that humans may develop prejudices partly because of the language they speak.

“Our work suggests that behaviour can be driven by cultural history embedded in a term’s historic use. Such histories can evidently vary between languages,” the paper said.
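As a rough sketch of how such embedding-bias measurements work (this is not the researchers' actual code), a target word can be scored by how much closer it sits, in cosine similarity, to one attribute pole than to the other. The four-dimensional vectors below are made-up toy values standing in for embeddings trained on billions of words.

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up four-dimensional "embeddings" standing in for vectors trained on web text.
vectors = {
    "flower":     np.array([0.9, 0.1, 0.3, 0.0]),
    "insect":     np.array([0.1, 0.9, 0.2, 0.1]),
    "pleasant":   np.array([0.8, 0.2, 0.4, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1, 0.2]),
}

def association(word, pos="pleasant", neg="unpleasant"):
    # Positive score: the word sits closer to the pleasant pole than to the unpleasant one.
    return cosine(vectors[word], vectors[pos]) - cosine(vectors[word], vectors[neg])

for word in ("flower", "insect"):
    print(word, round(association(word), 3))

Replacing "flower" and "insect" with names or occupation terms, and the pleasantness poles with gendered or ethnic attribute words, gives the kind of stereotype measurement the study describes.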

The article is here.

Would You Become An Immortal Machine?

Marcelo Gleiser
npr.org
Originally posted March 27, 2017

Here is an excerpt:

"A man is a god in ruins," wrote Ralph Waldo Emerson. This quote, which O'Connell places at the book's opening page, captures the essence of the quest. If man is a failed god, there may be a way to fix this. Since "The Fall," we "lost" our god-like immortality, and have been looking for ways to regain it. Can science do this? Is mortality merely a scientific question? Suppose that it is — and that we can fix it, as we can a headache. Would you pay the price by transferring your "essence" to a non-human entity that will hold it, be it silicone or some kind of artificial robot? Can you be you when you don't have your body? Are you really just transferrable information?

As O'Connell meets an extraordinary group of people, from serious scientists and philosophers to wackos, he keeps asking himself this question, knowing fully well his answer: Absolutely not! What makes us human is precisely our fallibility, our connection to our bodies, the existential threat of death. Remove that and we are a huge question mark, something we can't even contemplate. No thanks, says O'Connell, in a deliciously satiric style, at once lyrical, informative, and captivating.

The article is here.

Monday, May 1, 2017

Is Healthcare a Right? A Privilege? Something Entirely Different?

Brian Joondeph
The Health Care Blog
Originally published April 8, 2017

Here is an excerpt:

Most developed countries have parallel public and private healthcare systems. A public option covering everyone, with minimal or no out-of-pocket expense to patients, but with long wait times for care and limited treatment options. And a private option allowing individuals to purchase the healthcare or insurance they want and need, paying for it themselves, without subsidies, tax breaks or any government assistance. One option a right, the other a privilege.

For an analogy, think of K-12 schools. A public option available without cost to students. For most, a good and more than adequate education. And a free-market private school option for those who desire and have the means. Shop around, pay as much as you want, or default to the public option.

Each system has its pros and cons, but they are separate and distinct. Instead we are trying to combine both into a single scheme — Obamacare, Ryancare or whatever finally emerges from Congress. We get the worst of both systems – bureaucracy and high cost. And the best of neither – no universal coverage and limited freedom of choice.

The blog post is here.

Are Moral Judgments Good or Bad Things?

Robb Willer & Brent Simpson
Scientific American
Originally published April 10, 2017

Here is an excerpt:

Beyond the harms, there is also hypocrisy. It is not uncommon to discover that those who make moral judgments—public evaluations of the rightness or wrongness of others’ behavior—do not themselves conform to the moral norms they eagerly enforce. Think, for instance, of politicians or religious leaders who oppose gay rights but are later discovered soliciting sex from other men. These examples and others seem to make it clear: moral judgments are antisocial, a bug in the code of society.

But recent research challenges this view, suggesting that moral judgments are a critical part of the social fabric, a force that encourages people to consider the welfare of others. Our work, and that of others, implies that—while sometimes disadvantageous—moral judgments have important, positive effects for individuals and the groups they inhabit.

(cut)

To summarize, we find that moral judgments of unethical behavior are generally viewed as a legitimate means for maintaining group-beneficial norms of conduct. Those who use them are generally seen as moral and trustworthy, and individuals typically act more morally after communicating judgments of others.

The article is here.