Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, October 31, 2018

We’re Worrying About the Wrong Kind of AI

Mark Buchanan
Originally posted June 11, 2018

No computer has yet shown features of true human-level artificial intelligence, much less conscious awareness. Some experts think we won't see it for a long time to come. And yet academics, ethicists, developers and policy-makers are already thinking a lot about the day when computers become conscious, not to mention worrying about more primitive AI being used in defense projects.

Now consider that biologists have been learning to grow functioning “mini brains” or “brain organoids” from real human cells, and progress has been so fast that researchers are actually worrying about what to do if a piece of tissue in a lab dish suddenly shows signs of having conscious states or reasoning abilities. While we are busy focusing on computer intelligence, AI may arrive in living form first, and bring with it a host of unprecedented ethical challenges.

In the 1930s, the British mathematician Alan Turing famously set out the mathematical foundations for digital computing. It's less well known that Turing later pioneered the mathematical theory of morphogenesis, or how organisms develop from single cells into complex multicellular beings through a sequence of controlled transformations making increasingly intricate structures. Morphogenesis is also a computation, only with a genetic program controlling not just 0s and 1s, but complex chemistry, physics and cellular geometry.

Following Turing's thinking, biologists have learned to control the computation of biological development so accurately that lab growth of artificial organs, even brains, is no longer science fiction.

The information is here.

Learning Others’ Political Views Reduces the Ability to Assess and Use Their Expertise in Nonpolitical Domains

Marks, Joseph and Copland, Eloise and Loh, Eleanor and Sunstein, Cass R. and Sharot, Tali.
Harvard Public Law Working Paper No. 18-22. (April 13, 2018).


On political questions, many people are especially likely to consult and learn from those whose political views are similar to their own, thus creating a risk of echo chambers or information cocoons. Here, we test whether the tendency to prefer knowledge from the politically like-minded generalizes to domains that have nothing to do with politics, even when evidence indicates that person is less skilled in that domain than someone with dissimilar political views. Participants had multiple opportunities to learn about others’ (1) political opinions and (2) ability to categorize geometric shapes. They then decided to whom to turn for advice when solving an incentivized shape categorization task. We find that participants falsely concluded that politically like-minded others were better at categorizing shapes and thus chose to hear from them. Participants were also more influenced by politically like-minded others, even when they had good reason not to be. The results demonstrate that knowing about others’ political views interferes with the ability to learn about their competency in unrelated tasks, leading to suboptimal information-seeking decisions and errors in judgement. Our findings have implications for political polarization and social learning in the midst of political divisions.

You can download the paper here.

Probably a good resource to contemplate before discussing politics in psychotherapy.

Tuesday, October 30, 2018

How Trump’s Hateful Speech Raises the Risks of Violence

Cass Sunstein
Originally posted October 28, 2018

Here is an excerpt:

Is President Donald Trump responsible, in some sense, for the mailing of bombs to Hillary Clinton and other Democratic leaders? Is he responsible, in some sense, for the slaughter at the Pittsburgh synagogue?

If we are speaking in terms of causation, the most reasonable answer to both questions, and the safest, is: We don’t really know. More specifically, we don’t know whether these particular crimes would have occurred in the absence of Trump’s hateful and vicious rhetoric (including his enthusiasm for the despicable cry, “Lock her up!”).

But it’s also safe, and plenty reasonable, to insist that across the American population, hateful and vicious rhetoric from the president of the United States is bound to increase risks of violence. Because of that rhetoric, the likelihood of this kind of violence is greater than it would otherwise be. The president is responsible for elevating the risk that people will try to kill Democrats and others seen by some of his followers as “enemies of the people” (including journalists and Jews).

To see why, we should investigate one of the most striking findings in modern social psychology that has been replicated on dozens of occasions. It goes by the name of “group polarization.”

The basic idea is that when people are listening and talking to one another, they tend to end up in a more extreme position in the same direction as the views with which they began. Groups of like-minded people can become radicalized.

The info is here.

West Virginia Poll examines moral and social issues

Brad McElhinny
Originally posted September 30, 2018

Here is an excerpt:

Role of God in morality

There was a 50-50 split in a question asking respondents to select the statement that best reflects their view of the role of God in morality.

Half responded, “It is not necessary to believe in God in order to be moral and have good values.”

The other half of respondents chose the option “It is necessary to believe in God in order to be moral and have good values.”

“The two big, significant differences are younger people and self-identified conservatives who have opposite points of view on this question,” said professional pollster Rex Repass, the author of the West Virginia Poll.

Of younger people — those between ages 18 and 34 — 60 percent said it’s not necessary to believe in God to have good moral and ethical values.

That compared to 35 percent of those ages 55-64 who answered with that statement.

“So generally, if you’re under 35, you’re more likely to say it’s not necessary to have a higher being in your life to have good values,” Repass said.

“If you’re older that percentage increases. You’re more likely to believe you have to have God in your life to be moral and have good values.”

Of respondents who labeled themselves as conservative, 73 percent said it is necessary to believe in God to have moral values.

The info is here.

Monday, October 29, 2018

We hold people with power to account. Why not algorithms?

Hannah Fry
The Guardian
Originally published September 17, 2018

Here is an excerpt:

But already in our hospitals, our schools, our shops, our courtrooms and our police stations, artificial intelligence is silently working behind the scenes, feeding on our data and making decisions on our behalf. Sure, this technology has the capacity for enormous social good – it can help us diagnose breast cancer, catch serial killers, avoid plane crashes and, as the health secretary, Matt Hancock, has proposed, potentially save lives using NHS data and genomics. Unless we know when to trust our own instincts over the output of a piece of software, however, it also brings the potential for disruption, injustice and unfairness.

If we permit flawed machines to make life-changing decisions on our behalf – by allowing them to pinpoint a murder suspect, to diagnose a condition or take over the wheel of a car – we have to think carefully about what happens when things go wrong.

Back in 2012, a group of 16 Idaho residents with disabilities received some unexpected bad news. The Department of Health and Welfare had just invested in a “budget tool” – a swish piece of software, built by a private company, that automatically calculated their entitlement to state support. It had declared that their care budgets should be slashed by several thousand dollars each, a decision that would put them at serious risk of being institutionalised.

The problem was that the budget tool’s logic didn’t seem to make much sense. While this particular group of people had deep cuts to their allowance, others in a similar position actually had their benefits increased by the machine. As far as anyone could tell from the outside, the computer was essentially plucking numbers out of thin air.

The info is here.

The dismantling of informed consent is a disaster

David Penner
Originally posted September 26, 2018

Informed consent is the cornerstone of medical ethics. And every physician must defend this sacred principle from every form of evil that would seek to dismantle, degrade and debase it. If informed consent is the sun, then privacy, confidentiality, dignity, and trust are planets that go around it. For without informed consent, the descent of health care into amorality is inevitable, and the doctor-patient relationship is doomed to ruination, oblivion, and despair. It is also important to acknowledge the fact that a lack of informed consent has become endemic to our health care system.

This betrayal of patient trust is inextricably linked to three violations: a rape of the body, a rape of the mind and a rape of the soul. The rape of the mind is anchored in a willful nondisclosure of common long-term side effects associated with powerful drugs, such as opioids and certain types of chemotherapy. When a patient starts a chemotherapy regimen, they are typically briefed by a nurse, who proceeds to educate them regarding common short-term side effects such as mouth sores, constipation, and nausea, while failing to mention any of the typical long-term side effects, such as cognitive difficulties and early menopause. It is the long-term side effects that underscore the tragedy of having to resort to chemotherapy, as they can have a devastating impact on a patient’s quality of life, even long after remission has been attained.

The info is here.

Sunday, October 28, 2018

Moral enhancement and the good life

Hazem Zohny
Med Health Care and Philos (2018).


One approach to defining enhancement is in the form of bodily or mental changes that tend to improve a person’s well-being. Such a “welfarist account”, however, seems to conflict with moral enhancement: consider an intervention that improves someone’s moral motives but which ultimately diminishes their well-being. According to the welfarist account, this would not be an instance of enhancement—in fact, as I argue, it would count as a disability. This seems to pose a serious limitation for the account. Here, I elaborate on this limitation and argue that, despite it, there is a crucial role for such a welfarist account to play in our practical deliberations about moral enhancement. I do this by exploring four scenarios where a person’s motives are improved at the cost of their well-being. A framework emerges from these scenarios which can clarify disagreements about moral enhancement and help sharpen arguments for and against it.

The article is here.

Saturday, October 27, 2018

Obtaining consensus in psychotherapy: What holds us back?

Goldfried, M.R.
American Psychologist


Although the field of psychotherapy has been in existence for well over a century, it nonetheless continues to be preparadigmatic, lacking a consensus or scientific core. Instead, it is characterized by a large and increasing number of different schools of thought. In addition to the varying ways in which psychotherapy has been conceptualized, there also exists a long-standing gap between psychotherapy research and how it is conducted in actual clinical practice. Finally, there also exists a tendency to place great emphasis on what is new, often rediscovering or reinventing past contributions. This article describes each of these impediments to obtaining consensus and offers some suggestions for what might be done to address them.

Here is an excerpt:

There are at least three problematic issues that seem to contribute to the difficulty we have in obtaining a consensus within the field of psychotherapy: The first involves our long-standing practice of solely working within theoretical orientations or eclectic combinations of orientations. Moreover, not agreeing with those having other frameworks on how to bring about therapeutic change results in the proliferation of schools of therapy (Goldfried, 1980). The second issue involves the longstanding gap between research and practice, where many therapists may fail to see the relevance to their day-to-day clinical practice and also where many researchers do not make systematic use of clinical observations as a means of guiding their research (Goldfried, 1982). The third issue is our tendency to neglect past contributions to the field (Goldfried, 2000). We do not build on our previous body of knowledge but rather rediscover what we already know or—even worse—ignore past work and replace it with something new. What follows is a description of how these three issues prevent psychotherapy from achieving a consensus, after which there will be a consideration of some possible steps that might be taken in working toward a resolution of these issues.

The article is here, behind a paywall.

Friday, October 26, 2018

Ethics, a Psychological Perspective

Andrea Dobson
Originally posted September 22, 2018

Key Takeaways
  • With emerging technologies like machine learning, developers can now achieve much more than ever before. But this new power has a downside. 
  • When we talk about ethics - the principles that govern a person's behaviour - it is impossible not to talk about psychology. 
  • Processes like obedience, conformity, moral disengagement, cognitive dissonance and moral amnesia all reveal why, though we see ourselves as inherently good, in certain circumstances we are likely to behave badly.
  • Recognising that although people aren’t rational, they are to a large degree predictable has profound implications for how tech and business leaders can approach making their organisations more ethical.
  • The strongest way to make a company more ethical is to start with the individual. Companies become ethical one person at a time, one decision at a time. We all want to be seen as good people (our moral identity), and that desire comes with the responsibility to act like it.

The Ethics Of Transhumanism And The Cult Of Futurist Biotech

Julian Vigo
Originally posted September 24, 2018

Here is an excerpt:

The philosophical tenets, academic theories, and institutional practices of transhumanism are well known. Max More, a British philosopher and leader of the extropian movement, claims that transhumanism is the “continuation and acceleration of the evolution of intelligent life beyond its currently human form and human limitations by means of science and technology, guided by life-promoting principles and values.” This very definition, however, is a paradox, since the ethos of this movement is to promote life through that which is not life, even by removing pieces of life, to create something billed as meta-life. Indeed, it is clear that transhumanism banks on its own contradiction: that life is deficient as is, yet can be bettered by prolonging life even to the detriment of life.

Stefan Lorenz Sorgner is a German philosopher and bioethicist who has written widely on the ethical implications of transhumanism, including writings on cryonics and the longevity of human life, all of which go against most ecological principles given the amount of resources needed to keep a body in “suspended animation” post-death. Sorgner’s writings, like those of Kyle Munkittrick, invoke an almost naïve rejection of death, noting that death is neither “natural” nor a part of human evolution. In fact, much of the writing on transhumanism takes a radical approach to technology: anyone who dares question cutting off healthy limbs to make way for a super-Olympian sportsperson would be called a Luddite, anti-technology. But that is a false dichotomy, since most critics of transhumanism are not against all technology; rather, they question the ethics of any technology that interferes with human rights.

The info is here.

Thursday, October 25, 2018

Novartis links bonuses to ethics in bid to rebuild reputation

John Miller
Originally posted September 17, 2018

Swiss drugmaker Novartis (NOVN.S) has revealed its employees only get a bonus if they meet or exceed expectations for ethical behavior as it seeks to address past shortcomings that have damaged its reputation.

Chief Executive Vas Narasimhan has made strengthening the Swiss drugmaker’s ethics culture a priority after costly bribery scandals or legal settlements in South Korea, China and the United States.

Employees now receive a 1, 2 or a 3 score on their values and behavior. Receiving a 2, which Novartis said denotes meeting expectations, or a 3, for “role model” behavior, would make them eligible for a bonus of up to 35 percent of their total compensation.

Novartis said it began the scoring system in 2016 but details have not been widely reported. Company officials outlined the system on Monday on a call about its ethics efforts with analysts and journalists.
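The bonus rule as reported reduces to a simple threshold on the values score. A minimal sketch, assuming only what the article states (the function names and structure are our illustration, not Novartis's actual system):

```python
def bonus_eligible(values_score: int) -> bool:
    """A score of 2 ("meets expectations") or 3 ("role model") qualifies;
    a score of 1 means no bonus at all."""
    return values_score in (2, 3)


def max_bonus(values_score: int, total_compensation: float) -> float:
    """Eligible employees can receive a bonus of up to 35 percent
    of their total compensation; ineligible employees receive nothing."""
    return 0.35 * total_compensation if bonus_eligible(values_score) else 0.0
```

For example, an employee scored a 3 on $100,000 total compensation would be eligible for a bonus of up to $35,000, while a colleague scored a 1 would be eligible for nothing.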

The info is here.

The Little-known Emotion that Makes Ethical Leadership Contagious

Notre Dame Center for Ethics
Originally posted in September 2018

Here is an excerpt:

Elevation at Work

Elevation is not limited to dramatic and dangerous situations. It can also arise in more mundane places like assembly lines, meeting rooms, and corporate offices. In fact, elevation is a powerful and often under-appreciated force that makes ethical leadership work. A 2010 study collected data from workers about their feelings toward their supervisors and found that bosses could cause their followers to experience elevation through acts of fairness and self-sacrifice. Elevation caused these workers to have positive feelings toward their bosses, and the effect spilled over into other relationships; they were kinder and more helpful toward their coworkers and more committed to their organization as a whole.

These findings suggest that elevation is a valuable emotion for leaders to understand. It can give ethical leadership traction by helping a leader's values and behaviors take root in his or her followers. One study puts it this way: "Elevation puts moral values into action."

Put it in Practice

The best way to harness elevation in your organization is by changing the way you communicate about ethics. Keep these guidelines in mind.

Find exemplars who elevate you and others.

Most companies have codes of values. But true moral inspiration comes from people, not from abstract principles. Although we need rules, guidelines, regulations, and laws, we are only inspired by the people who embody them and live them out. For each of your organization's values, make sure you can identify a person who exemplifies it in his or her life and work.

The info is here.

Wednesday, October 24, 2018

Open Letter: Netflix's "Afflicted" Abandoning Ethics and Science

Maya Dusenbery
Pacific Standard
Originally published September 20, 2018

Here are two excerpts:

The problem is not that the series included these skeptical views. To be sure, one of the most difficult parts of being ill with these "contested" conditions—or, for that matter, even a well-accepted but "invisible" chronic disease—is contending with such doubts, which are pervasive among friends and family, the media, and the medical profession at large. But according to the participants, in many cases, interviews with their family and friends were deceptively edited to make them appear more skeptical than they actually are. In some cases, clips in which family members acknowledged they'd wondered if their loved one's problem was psychological early on in their illness were taken out of context to imply they still harbored those beliefs. In others, producers seem to have put words into their mouths: According to Jamison, interviewees were asked to start their answers by repeating the question they had been asked. This is how the producers managed to get a clip of his mom seemingly questioning if "hypochondria" was a component of her son's illness.


Even more irresponsible is the inclusion of such psychological speculation by various unqualified doctors. Presented as experts despite the fact that they have not examined the participants and are not specialists in their particular conditions, they muse vaguely about the power of the mind to produce physical symptoms. A single psychiatrist, who has never evaluated any of the subjects, is quoted extensively throughout. In Episode 4, which is entitled "The Mind," he gets right to the point: "Statistically, it's more likely that the cause of the problem is a common psychiatric problem more than it is an unknown or uncatalogued physical illness. You can be deluded that you're sick, meaning you can believe you're sick when in fact you're not sick."

The info is here.

Chinese Ethics

Wong, David
The Stanford Encyclopedia of Philosophy (Fall 2018 Edition)

The tradition of Chinese ethical thought is centrally concerned with questions about how one ought to live: what goes into a worthwhile life, how to weigh duties toward family versus duties toward strangers, whether human nature is predisposed to be morally good or bad, how one ought to relate to the non-human world, the extent to which one ought to become involved in reforming the larger social and political structures of one’s society, and how one ought to conduct oneself when in a position of influence or power. The personal, social, and political are often intertwined in Chinese approaches to the subject. Anyone who wants to draw from the range of important traditions of thought on this subject needs to look seriously at the Chinese tradition. The canonical texts of that tradition have been memorized by schoolchildren in Asian societies for hundreds of years, and at the same time have served as objects of sophisticated and rigorous analysis by scholars and theoreticians rooted in widely variant traditions and approaches. This article will introduce ethical issues raised by some of the most influential texts in Confucianism, Mohism, Daoism, Legalism, and Chinese Buddhism.

The info is here.

Tuesday, October 23, 2018

James Gunn's Firing Is What Happens When We Outsource Morality to Capitalism

Anhar Karim
Originally posted September 16, 2018

Here is an excerpt:

A study last year from Cone Communications found that 87% of consumers said they’d purchase a company’s product if said company showed that they cared about issues consumers cared about. On the flip side of that, 75% of consumers said they would not buy from a company which showed they did not care. If business executives and CEOs are following along, as they surely are, the lesson is this: If a company wants to stay on top in the modern age, and if they want to maximize their profits, then they need to beat their competitors not only with superior products but also with demonstrated, superior moral behavior.

This, on its face, does not appear horrible. Indeed, this new development has led to a lot of undeniable good. It’s this idea that gave the #MeToo movement its bite and toppled industry giants such as Harvey Weinstein, Kevin Spacey and Les Moonves. It’s this strategy that’s led Warner Brothers to mandate an inclusion rider, Sony to diversify their comic titles, and Marvel to get their heroes to visit children in hospitals.

So how could any of this be negative?

Well, consider the other side of these attempts at corporate responsibility, the efforts that look good but help no one. What am I talking about? Consider that we recently had a major movie with a song celebrating difference and being true to yourself. That sounds good. However, the plot of the film is actually about exploiting minorities for profit. So it falls flat. Or consider that we had a woman cast in a Marvel franchise playing a role normally reserved for a man. Sounds progressive, right? Until we realize that that is also an example of a white actor trying her best to look Asian and thus limiting diversity. Also, consider that Sony decided to try and help fight back against bullying. Noble intent, but the way they went about it? They helped put up posters oddly suggesting that bullying could be stopped with sending positive emojis. Again, all of these sound sort of good on paper, but in practice, they help no one.

The info is here.

Why you need a code of ethics (and how to build one that sticks)

Josh Fruhlinger
Originally posted September 17, 2018

Here is an excerpt:

Most of us probably think of ourselves as ethical people. But within organizations built to maximize profits, many seemingly inevitably drift towards more dubious behavior, especially when it comes to user personal data. "More companies than not are collecting data just for the sake of collecting data, without having any reason as to why or what to do with it," says Philip Jones, a GDPR regulatory compliance expert at Capgemini. "Although this is an expensive and unethical approach, most businesses don’t think twice about it. I view this approach as one of the highest risks to companies today, because they have no clue where, how long, or how accurate much of their private data is on consumers."

This is the sort of organizational ethical drift that can arise in the absence of clear ethical guidelines—and it's the sort of drift that laws like the GDPR, the EU's stringent new framework for how companies must handle customer data, are meant to counter. And the temptation is certainly there to simply use such regulations as a de facto ethics policy. "The GDPR and laws like it make the process of creating a digital ethics policy much easier than it once was," says Ian McClarty, President and CEO of PhoenixNAP.  "Anything and everything that an organization does with personal data obtained from an individual must come with the explicit consent of that data owner. It’s very hard to subvert digital ethics when one’s ability to use personal data is curtailed in such a draconian fashion."

But companies cannot simply outsource their ethics codes to regulators and think that hewing to the letter of the law will keep their reputations intact. "New possibilities emerge so fast," says Mads Hennelund, a consultant at Nextwork, "that companies will be forced by market competition to apply new technologies before any regulator has been able to grasp them and impose meaningful rules or standards." He also notes that, if different silos within a company are left to their own devices and subject to their own particular forms of regulation and technology adoption, "the organization as a whole becomes ethically fragmented, consisting of multiple ethically autonomous departments."

The info is here.

Monday, October 22, 2018

Trump's 'America First' Policy Puts Economy Before Morality

Zeke Miller, Jonathan Lemire, and Catherine Lucey
Originally posted October 18, 2018

Here is an excerpt:

Still, Trump's transactional approach isn't sitting well with some of his Republican allies in Congress. His party for years championed the idea that the U.S. had a duty to promote U.S. values and human rights and even to intervene when they are challenged. Some Republicans have urged Trump not to abandon that view.

"I'm open to having Congress sit down with the president if this all turns out to be true, and it looks like it is, ... and saying, 'How can we express our condemnation without blowing up the Middle East?" Sen. John Kennedy, R-La., said. "Our foreign policy has to be anchored in values."

Trump dismisses the notion that he buddies up to dictators, but he does not express a sense that U.S. leadership extends beyond the U.S. border.

In an interview with CBS' "60 Minutes" that aired Sunday, he brushed aside his own assessment that Putin was "probably" involved in assassinations and poisonings.

"But I rely on them," he said. "It's not in our country."

Relations between the U.S. and Saudi Arabia are complex. The two nations are entwined on energy, military, economic and intelligence issues. The Trump administration has aggressively courted the Saudis for support of its Middle East agenda to counter Iranian influence, fight extremism and try to forge peace between Israel and the Palestinians.

The info is here.

Why the Gene Editors of Tomorrow Need to Study Ethics Today

Katie Palmer
Originally posted September 18, 2018

Two years after biochemist Jennifer Doudna helped introduce the world to the gene-editing tool known as Crispr, a 14-year-old from New Jersey turned it loose on a petri dish full of lung cancer cells, disrupting their ability to multiply. “In high school, I was all on the Crispr bandwagon,” says Jiwoo Lee, who won top awards at the 2016 Intel International Science and Engineering Fair for her work. “I was like, Crisprize everything!” Just pick a snippet of genetic material, add one of a few cut-and-paste proteins, and you’re ready to edit genomes. These days, though, Lee describes her approach as “more conservative.” Now a sophomore at Stanford, she spent part of her first year studying not just the science of Crispr but also the societal discussion around it. “Maybe I matured a little bit,” she says.

Doudna and Lee recently met at the Innovative Genomics Institute in Berkeley to discuss Crispr’s ethical implications. “She’s so different than I was at that age,” Doudna says. “I feel like I was completely clueless.” For Lee’s generation, it is critically important to start these conversations “at as early a stage as possible,” Doudna adds. She warns of a future in which humans take charge of evolution—both their own and that of other species. “The potential to use gene editing in germ cells or embryos is very real,” she says. Both women believe Crispr may eventually transform clinical medicine; Lee even hopes to build her career in that area—but she’s cautious. “I think there’s a really slippery slope between therapy and enhancement,” Lee says. “Every culture defines disease differently.” One country’s public health campaign could be another’s eugenics.

The info is here.

Sunday, October 21, 2018

Leaders matter morally: The role of ethical leadership in shaping employee moral cognition and misconduct.

Moore, C., Mayer, D. M., Chiang, F. F. T., et al.
Journal of Applied Psychology. Advance online publication.


There has long been interest in how leaders influence the unethical behavior of those whom they lead. However, research in this area has tended to focus on leaders’ direct influence over subordinate behavior, such as through role modeling or eliciting positive social exchange. We extend this research by examining how ethical leaders affect how employees construe morally problematic decisions, ultimately influencing their behavior. Across four studies, diverse in methods (lab and field) and national context (the United States and China), we find that ethical leadership decreases employees’ propensity to morally disengage, with ultimate effects on employees’ unethical decisions and deviant behavior. Further, employee moral identity moderates this mediated effect. However, the form of this moderation is not consistent. In Studies 2 and 4, we find that ethical leaders have the largest positive influence over individuals with a weak moral identity (providing a “saving grace”), whereas in Study 3, we find that ethical leaders have the largest positive influence over individuals with a strong moral identity (catalyzing a “virtuous synergy”). We use these findings to speculate about when ethical leaders might function as a “saving grace” versus a “virtuous synergy.” Together, our results suggest that employee misconduct stems from a complex interaction between employees, their leaders, and the context in which this relationship takes place, specifically via leaders’ influence over employees’ moral cognition.

Beginning of the Discussion section

Three primary findings emerge from these four studies. First, we consistently find a negative relationship between ethical leadership and employee moral disengagement. This supports our primary hypothesis: leader behavior is associated with how employees construe decisions with ethical import. Our manipulation of ethical leadership and its resulting effects provide confidence that ethical leadership has a direct causal influence over employee moral disengagement.

In addition, this finding was consistent in both American and Chinese work contexts, suggesting the effect is not culturally bound.

Second, we also found evidence across all four studies that moral disengagement functions as a mechanism to explain the relationship between ethical leadership and employee unethical decisions and behaviors. Again, this result was consistent across time- and respondent-separated field studies and an experiment, in American and Chinese organizations, and using different measures of our primary constructs, providing important assurance of the generalizability of our findings and bolstering our confidence in moral disengagement as an important, unique, and robust mechanism explaining ethical leaders’ positive effects within their organizations.

Finally, we found persistent evidence that the centrality of an employee’s moral identity plays a key role in the relationship between ethical leadership and employee unethical decisions and behavior (through moral disengagement). However, the nature of this moderated relationship varied across studies.

Saturday, October 20, 2018

Who should answer the ethical questions surrounding artificial intelligence?

Jack Karsten
Originally published September 14, 2018

Continuing advancements in artificial intelligence (AI) for use in both the public and private sectors warrant serious ethical consideration. As the capability of AI improves, the issues of transparency, fairness, privacy, and accountability associated with using these technologies become more serious. Many developers in the private sector acknowledge the threats AI poses and have created their own codes of ethics to monitor AI development responsibly. However, many experts believe government regulation may be required to resolve issues ranging from racial bias in facial recognition software to the use of autonomous weapons in warfare.

On Sept. 14, the Center for Technology Innovation hosted a panel discussion at the Brookings Institution to consider the ethical dilemmas of AI. Brookings scholars Christopher Meserole, Darrell West, and William Galston were joined by Charina Chou, the global policy lead for emerging technologies at Google, and Heather Patterson, a senior research scientist at Intel.

Enjoy the video

Friday, October 19, 2018

If Humility Is So Important, Why Are Leaders So Arrogant?

Bill Taylor
Harvard Business Review
Originally published October 15, 2018

Here is an excerpt:

With all due modesty, I’d offer a few answers to these vexing questions. For one thing, too many leaders think they can’t be humble and ambitious at the same time. One of the great benefits of becoming CEO of a company, head of a business unit, or leader of a team, the prevailing logic goes, is that you’re finally in charge of making things happen and delivering results. Edgar Schein, professor emeritus at MIT Sloan School of Management, and an expert on leadership and culture, once asked a group of his students what it means to be promoted to the rank of manager. “They said without hesitation, ‘It means I can now tell others what to do.’” Those are the roots of the know-it-all style of leadership. “Deep down, many of us believe that if you are not winning, you are losing,” Schein warns. The “tacit assumption” among executives “is that life is fundamentally and always a competition” — between companies, but also between individuals within companies. That’s not exactly a mindset that recognizes the virtues of humility.

In reality, of course, humility and ambition need not be at odds. Indeed, humility in the service of ambition is the most effective and sustainable mindset for leaders who aspire to do big things in a world filled with huge unknowns. Years ago, a group of HR professionals at IBM embraced a term to capture this mindset. The most effective leaders, they argued, exuded a sense of “humbition,” which they defined as “one part humility and one part ambition.” We “notice that by far the lion’s share of world-changing luminaries are humble people,” they wrote. “They focus on the work, not themselves. They seek success — they are ambitious — but they are humbled when it arrives…They feel lucky, not all-powerful.”

The info is here.

Risk Management Considerations When Treating Violent Patients

Kristen Lambert
Psychiatric News
Originally posted September 4, 2018

Here is an excerpt:

When a patient has a history of expressing homicidal ideation or has been violent previously, you should document, in every subsequent session, whether the patient admits or denies homicidal ideation. When the patient expresses homicidal ideation, document what he/she expressed and the steps you did or did not take in response and why. Should an incident occur, your documentation will play an important role in defending your actions.

Despite taking precautions, your patient may still commit a violent act. The following are some strategies that may minimize your risk.

  • Conduct complete, timely, and thorough risk assessments.
  • Document, including the reasons for taking and not taking certain actions.
  • Understand your state’s law on duty to warn. Be aware of the language in the law on whether you have a mandatory, permissive, or no duty to warn/protect.
  • Understand your state’s laws regarding civil commitment.
  • Understand your state’s laws regarding disclosure of confidential information and when you can do so.
  • Understand your state’s laws regarding discussing firearms ownership and/or possession with patients.
  • If you have questions, consult an attorney or risk management professional.

Thursday, October 18, 2018

Medicine’s Financial Contamination

Editorial Board
The New York Times
Originally posted September 14, 2018

Here is an excerpt:

Sloan Kettering’s other leaders were well aware of these relationships. The hospital has said that it takes pains to wall off any employee involved with a given outside company from the hospital’s dealings with that company. But it’s difficult to believe that conflicts of this magnitude could have truly been worked around, given how many of them there were, and how high up on the organizational chart Dr. Baselga sat. It also strains credulity to suggest that he was the hospital’s only leader with such conflicts or with such apparent difficulty disclosing them. After the initial report, but before Dr. Baselga’s resignation, the hospital sent a letter to its entire 17,000-person staff acknowledging that the institution as a whole needed to do better. It remains to be seen what additional actions will be taken — and by whom — to repair the situation.

Financial conflicts are hardly confined to Sloan Kettering. A 2015 study in The BMJ found that a “substantial number” of academic leaders hold directorships that pay as much as or more than their clinical salaries. According to other surveys, nearly 70 percent of oncologists who speak at national meetings, nearly 70 percent of psychiatrists on the task force that ultimately decides what treatments should be recommended for what mental illnesses, and a significant number of doctors on Food and Drug Administration advisory committees have financial ties to the drug and medical device industries. As bioethicists have warned and as journal publishers have long acknowledged, not all of them report those ties when and where they are supposed to.

The info is here.

When You Fear Your Company Has Forgotten Its Principles

Sue Shellenbarger
The Wall Street Journal
Originally published September 17, 2018

Here is an excerpt:

People who object on principle to their employers’ conduct face many obstacles. One is the bystander effect—people’s reluctance to intervene against wrongdoing when others are present and witnessing it too, Dr. Grant says. Ask yourself in such cases, “If no one acted here, what would be the consequences?” he says. While most people think first about potential damage to their reputation and relationships, the long-term effects could be worse, he says.

Be careful not to argue too passionately for the changes you want, Dr. Grant says. Show respect for others’ viewpoint, and acknowledge the flaws in your argument to show you’ve thought it through carefully.

Be open about your concerns, says Jonah Sachs, an Oakland, Calif., speaker and author of “Unsafe Thinking,” a book on creative risk-taking. People who complain in secret are more likely to make enemies and be seen as disloyal, compared with those who resist in the open, research shows.

Successful change-makers tend to frame proposed changes as benefiting the entire company and its employees and customers, rather than just themselves, Mr. Sachs says. He cites a former executive at a retail drug chain who helped persuade top management to stop selling cigarettes in its stores. While the move tracked with the company’s health-focused mission, the executive strengthened her case by correctly predicting that it would attract more health-minded customers.

The info is here.

Wednesday, October 17, 2018

Huge price hikes by drug companies are immoral

Robert Klitzman
Originally posted September 18, 2018

Several pharmaceutical companies have been jacking up the prices of their drugs in unethical ways. Most recently, Nirmal Mulye, founder and president of Nostrum Pharmaceuticals, defended his decision to more than quadruple the price of nitrofurantoin, used to treat bladder infections, from about $500 to more than $2,300 a bottle. He said it was his "moral requirement to sell the product at the highest price."

Mulye argues that his only moral duty is to benefit his investors. As he said in defending Martin Shkreli, who in 2015 raised the price of the anti-parasite drug Daraprim 5,000%, from $13.50 to $750 per tablet, "When he raised the price of his drug he was within his rights because he had to reward his shareholders."
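For readers who want to check these figures, the percentage increases work out as follows (a minimal illustrative sketch; `pct_increase` is our own helper, and the prices are the approximate ones quoted above):

```python
# Sanity-check the reported price increases (illustrative only;
# pct_increase is a hypothetical helper, not from any cited source).
def pct_increase(old: float, new: float) -> float:
    """Percentage increase from an old price to a new price."""
    return (new - old) / old * 100

# Nitrofurantoin: ~$500 -> ~$2,300 per bottle ("more than quadruple")
print(pct_increase(500, 2300))    # 360.0, i.e. a 4.6x price
# Daraprim: $13.50 -> $750 per tablet (widely reported as ~5,000%)
print(pct_increase(13.50, 750))   # roughly 5455.6
```

Note that a "400%" headline figure and "more than quadruple" describe the same change loosely: a 4.6x price is a 360% increase.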

Mulye is wrong for many reasons. Drug companies deserve a reasonable return on their investment in research and development, but some of these companies are abusing the system. The development of countless new drugs depends on taxpayer money and sacrifices that patients in studies make in good faith. Excessive price hikes harm many people, threaten public health and deplete huge amounts of taxpayer money that could be better used in other ways.

The US government pays more than 40% of all Americans' prescription costs, and this amount has been growing faster than inflation. In 2015, over 118 million Americans were on some form of government health insurance, including around 52 million on Medicare and 62 million on Medicaid. And these numbers have been increasing. Today, around 59 million Americans are on Medicare and 75 million on Medicaid.

The info is here.

Machine Ethics and Artificial Moral Agents

Francesco Corea
Originally posted July 6, 2017

Here is an excerpt:

However, let’s look at the problem from a different angle. I was educated as an economist, so allow me to start my argument with this statement: let’s assume we have the perfect dataset. It is not only omni-comprehensive but also clean, consistent and deep both longitudinally and temporally speaking.

Even in this case, we have no guarantee that AI won't autonomously learn the same biases we did. In other words, removing biases by hand or by construction is no guarantee that those biases won't re-emerge spontaneously.

This possibility also raises another (philosophical) question: we are building this argument from the assumption that biases are (mostly) bad. So let's say the machines come up with a result we see as biased, and therefore we reset them and run the analysis again with new data. But the machines come up with a similarly ‘biased result’. Would we then be open to accepting that as true and revising what we consider to be biased?

This is basically a cultural and philosophical clash between two different species.

In other words, I believe that two of the reasons why embedding ethics into machine design is extremely hard are that i) we don't really know unanimously what ethics is, and ii) we should be open to admitting that our values or ethics might not be completely right and that what we consider to be biased is not the exception but rather the norm.

Developing a (general) AI is making us think about those problems and it will change (if it hasn’t already started) our values system. And perhaps, who knows, we will end up learning something from machines’ ethics as well.

The info is here.

Tuesday, October 16, 2018

Let's Talk About AI Ethics; We're On A Deadline

Tom Vander Ark
Originally posted September 13, 2018

Here is an excerpt:

Creating Values-Aligned AI

“The project of creating value-aligned AI is perhaps one of the most important things we will ever do,” said the Future of Life Institute. It’s not just about useful intelligence but “the ends to which intelligence is aimed and the social/political context, rules, and policies in and through which this all happens.”

The Institute created a visual map of interdisciplinary issues to be addressed:

  • Validation: ensuring that the right system specification is provided for the core of the agent given stakeholders' goals for the system.
  • Security: applying cyber security paradigms and techniques to AI-specific challenges.
  • Control: structural methods for operators to maintain control over advanced agents.
  • Foundations: foundational mathematical or philosophical problems that have bearing on multiple facets of safety.
  • Verification: techniques that help prove a system was implemented correctly given a formal specification.
  • Ethics: effort to understand what we ought to do and what counts as moral or good.
  • Governance: the norms and values held by society, which are structured through various formal and informal processes of decision-making to ensure accountability, stability, broad participation, and the rule of law.

Nudge or Grudge? Choice Architecture and Parental Decision‐Making

Jennifer Blumenthal‐Barby and Douglas J. Opel
The Hastings Center Report
Originally published March 28, 2018


Richard Thaler and Cass Sunstein define a nudge as “any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives.” Much has been written about the ethics of nudging competent adult patients. Less has been written about the ethics of nudging surrogates’ decision‐making and how the ethical considerations and arguments in that context might differ. Even less has been written about nudging surrogate decision‐making in the context of pediatrics, despite fundamental differences that exist between the pediatric and adult contexts. Yet, as the field of behavioral economics matures and its insights become more established and well‐known, nudges will become more crafted, sophisticated, intentional, and targeted. Thus, the time is now for reflection and ethical analysis regarding the appropriateness of nudges in pediatrics.

We argue that there is an even stronger ethical justification for nudging in parental decision‐making than with competent adult patients deciding for themselves. We give three main reasons in support of this: (1) child patients do not have autonomy that can be violated (a concern with some nudges), and nudging need not violate parental decision‐making authority; (2) nudging can help fulfill pediatric clinicians’ obligations to ensure parental decisions are in the child's interests, particularly in contexts where there is high certainty that a recommended intervention is low risk and of high benefit; and (3) nudging can relieve parents’ decisional burden regarding what is best for their child, particularly with decisions that have implications for public health.

The info is here.

Monday, October 15, 2018

ICP Ethics Code

Institute of Contemporary Psychoanalysis

Psychoanalysts strive to reduce suffering and promote self-understanding, while respecting human dignity. Above all, we take care to do no harm. Working in the uncertain realm of unconscious emotions and feelings, our exclusive focus must be on safeguarding and benefitting our patients as we try to help them understand their unconscious mental life. Our mandate requires us to err on the side of ethical caution. As clinicians who help people understand the meaning of their dreams and unconscious longings, we are aware of our power and sway. We acknowledge a special obligation to protect people from unintended harm resulting from our own human foibles.

In recognition of our professional mandate and our authority—and the private, subjective and influential nature of our work—we commit to upholding the highest ethical standards. These standards take the guesswork out of how best to create a safe container for psychoanalysis. These ethical principles inspire tolerant and respectful behaviors, which in turn facilitate the health and safety of our candidates, members and, most especially, our patients. Ultimately, ethical behavior protects us from ourselves, while preserving the integrity of our institute and profession.

Professional misconduct is not permitted, including, but not limited to dishonesty, discrimination and boundary violations. Members are asked to keep firmly in mind our core values of personal integrity, tolerance and respect for others. These values are critical to fulfilling our mission as practitioners and educators of psychoanalytic therapy. Prejudice is never tolerated whether on the basis of age, disability, ethnicity, gender, gender identity, race, religion, sexual orientation or social class. Institute decisions (candidate advancement, professional opportunities, etc.) are to be made exclusively on the basis of merit or seniority. Boundary violations, including, but not limited to sexual misconduct, undue influence, exploitation, harassment and the illegal breaking of confidentiality, are not permitted. Members are encouraged to seek consultation readily when grappling with any ethical or clinical concerns. Participatory democracy is a primary value of ICP. All members and candidates have the responsibility for knowing these guidelines, adhering to them and helping other members comply with them.

The ethics code is here.

Big Island considers adding honesty policy to ethics code

Associated Press
Originally posted September 14, 2018

Big Island officials are considering adding language to the county's ethics code requiring officers and employees to provide the public with information that is accurate and factual.

The county council voted last week in support of the measure, requiring county employees to provide honest information to "the best of each officer's or employee's abilities and knowledge," West Hawaii Today reported. It's set to go before council for final approval next week.

The current measure has changed from Puna Councilwoman Eileen O'Hara's original bill that simply stated "officers and employees should be truthful."

She introduced the measure in response to residents' concerns, but amended it to gain the support of her colleagues, she said.

The info is here.

Sunday, October 14, 2018

The Myth of Freedom

Yuval Noah Harari
The Guardian
Originally posted September 14, 2018

Here is an excerpt:

Unfortunately, “free will” isn’t a scientific reality. It is a myth inherited from Christian theology. Theologians developed the idea of “free will” to explain why God is right to punish sinners for their bad choices and reward saints for their good choices. If our choices aren’t made freely, why should God punish or reward us for them? According to the theologians, it is reasonable for God to do so, because our choices reflect the free will of our eternal souls, which are independent of all physical and biological constraints.

This myth has little to do with what science now teaches us about Homo sapiens and other animals. Humans certainly have a will – but it isn’t free. You cannot decide what desires you have. You don’t decide to be introvert or extrovert, easy-going or anxious, gay or straight. Humans make choices – but they are never independent choices. Every choice depends on a lot of biological, social and personal conditions that you cannot determine for yourself. I can choose what to eat, whom to marry and whom to vote for, but these choices are determined in part by my genes, my biochemistry, my gender, my family background, my national culture, etc – and I didn’t choose which genes or family to have.

This is not abstract theory. You can witness this easily. Just observe the next thought that pops up in your mind. Where did it come from? Did you freely choose to think it? Obviously not. If you carefully observe your own mind, you come to realise that you have little control of what’s going on there, and you are not choosing freely what to think, what to feel, and what to want.

Though “free will” was always a myth, in previous centuries it was a helpful one. It emboldened people who had to fight against the Inquisition, the divine right of kings, the KGB and the KKK. The myth also carried few costs. In 1776 or 1945 there was relatively little harm in believing that your feelings and choices were the product of some “free will” rather than the result of biochemistry and neurology.

But now the belief in “free will” suddenly becomes dangerous. If governments and corporations succeed in hacking the human animal, the easiest people to manipulate will be those who believe in free will.

The info is here.

Saturday, October 13, 2018

A Top Goldman Banker Raised Ethics Concerns. Then He Was Gone.

Emily Flitter, Kate Kelly and David Enrich
The New York Times
Originally posted September 11, 2018

By the tight-lipped standards of Goldman Sachs, the phone call from one of the firm’s most senior investment bankers was explosive.

James C. Katzman, a Goldman partner and the leader of its West Coast mergers-and-acquisitions practice, dialed the bank’s whistle-blower hotline in 2014 to complain about what he regarded as a range of unethical practices, according to accounts by people close to Mr. Katzman, which a Goldman spokesman confirmed. His grievances included an effort by Goldman to hire a customer’s child and colleagues’ repeated attempts to obtain and then share confidential client information.

Mr. Katzman expected lawyers at the firm Fried, Frank, Harris, Shriver & Jacobson, which monitored the hotline, to investigate his allegations and share them with independent members of Goldman’s board of directors, the people close to Mr. Katzman said.

The complaints were an extraordinary example of a senior employee’s taking on what he perceived to be corporate wrongdoing at an elite Wall Street bank. But they were never independently investigated or fully relayed to the Goldman board.

The information is here.

Friday, October 12, 2018

The New Standardized Morality Test. Really.

Peter Greene
Forbes - Education
Originally published September 13, 2018

Here is an excerpt:

Morality is sticky and complicated, and I'm not going to pin it down here. It's one thing to manage your own moral growth and another thing to foster the moral development of family and friends and still quite another thing to have a company hired by a government draft up morality curriculum that will be delivered by yet another wing of the government. And it is yet another other thing to create a standardized test by which to give students morality scores.

But the folks at ACT say they will "leverage the expertise of U.S.-based research and test development teams to create the assessment, which will utilize the latest theory and principles of social and emotional learning (SEL) through the development process." That is quite a pile of jargon to dress up "We're going to cobble together a test to measure how moral a student is. The test will be based on stuff."

ACT Chief Commercial Officer Suzana Delanghe is quoted saying "We are thrilled to be supporting a holistic approach to student success" and promises that they will create a "world class assessment that measures UAE student readiness" because even an ACT manager knows better than to say that they're going to write a standardized test for morality.

The info is here.

Americans Are Shifting The Rest Of Their Identity To Match Their Politics

Perry Bacon Jr.
Originally posted September 11, 2018

Here is an excerpt:

In a recently published book, the University of Pennsylvania’s Michele Margolis makes a case similar to Egan’s, specifically about religion: Her research found, for example, that church attendance by Democrats declined between 2002 and 2004, when then-President Bush and Republicans were emphasizing Bush’s faith and how it connected to his opposition to abortion and gay marriage.

I don’t want to overemphasize the results of these studies. Egan still believes that the primary dynamic in politics and identity is that people change parties to match their other identities. But I think Egan’s analysis is in line with a lot of emerging political science that finds U.S. politics is now a fight about identity and culture (and perhaps it always was). Increasingly, the political party you belong to represents a big part of your identity and is not just a reflection of your political views. It may even be your most important identity.

Asked what he thinks the implications of his research are, Egan said that he shies away from saying whether the results are “good or bad.” “I don’t think one kind of identity (say ethnicity or religion) is necessarily more authentic than another (e.g., ideology or party),” he said in an email to FiveThirtyEight.

The info is here.

This information is important for better understanding your patients and their identity development.

Thursday, October 11, 2018

Pharma exec had 'moral requirement' to raise price 400%

Wayne Drash
Originally published September 12, 2018

A pharmaceutical company executive defended his company's recent 400% drug price increase, telling the Financial Times that his company had a "moral requirement to sell the product at the highest price." The head of the US Food and Drug Administration blasted the executive in a response on Twitter.

Nirmal Mulye, founder and president of Nostrum Pharmaceuticals, commented in a story Tuesday about the decision to raise the price of an antibiotic mixture called nitrofurantoin from about $500 per bottle to more than $2,300. The drug is listed by the World Health Organization as an "essential" medicine for lower urinary tract infections.

"I think it is a moral requirement to make money when you can," Mulye told the Financial Times, "to sell the product for the highest price."

The info is here.

Does your nonprofit have a code of ethics that works?

Mary Beth West
USA Today Network - Tennessee
Originally posted September 10, 2018

Each year, the Public Relations Society of America recognizes September as ethics month.

Our present #FakeNews / #MeToo era offers a daily diet of news coverage and exposés about ethics shortfalls in business, media and government sectors.

One arena sometimes overlooked is that of nonprofit organizations.

I am currently involved in a national ethics-driven bylaw reform movement for PRSA itself, which is a 501(c)(6) nonprofit with 21,000-plus members globally, in the “business league” category.

While PRSA’s code of ethics has stood for decades as an industry standard for communications ethics – promoting members’ adherence to only truthful and honest practices – PRSA’s code is not enforceable.

Challenges with unenforced ethics codes

Unenforced codes of ethics are commonplace in the nonprofit arena, particularly for volunteer, member-driven organizations.

PRSA converted from its enforced code of ethics to one that is unenforced by design, nearly two decades ago.

The reason: enforcing code compliance and the adjudication processes inherent to it were a pain in the neck (and a pain in the wallet, due to litigation risks).

The info is here.

Wednesday, October 10, 2018

Psychologists Are Standing Up Against Torture at Gitmo

Rebecca Gordon
Originally posted September 11, 2018

Sometimes the good guys do win. That’s what happened on August 8 in San Francisco when the Council of Representatives of the American Psychological Association (APA) decided to extend a policy keeping its members out of the US detention center at Guantánamo Bay, Cuba.

The APA’s decision is important—and not just symbolically. Today we have a president who has promised to bring back torture and “load up” Guantánamo “with some bad dudes.” When healing professionals refuse to work there, they are standing up for human rights and against torture.

It wasn’t always so. In the early days of Guantánamo, military psychologists contributed to detainee interrogations there. It was for Guantánamo that Defense Secretary Donald Rumsfeld approved multiple torture methods, including among others excruciating stress positions, prolonged isolation, sensory deprivation, and enforced nudity. Military psychologists advised on which techniques would take advantage of the weaknesses of individual detainees. And it was two psychologists, one an APA member, who designed the CIA’s whole “enhanced interrogation program.”

The info is here.

Urban Meyer, Ohio State Football, and How Leaders Ignore Unethical Behavior

David Mayer
Harvard Business Review
Originally posted September 4, 2018

Here is an excerpt:

A sizable literature in management and psychology helps us understand how people become susceptible to moral biases and make choices that are inconsistent with their values and the values of their organizations. Reading the report with that lens can help leaders better understand the biases that get in the way of ethical conduct and ethical organizations.

Performance over principles. One number may surpass all other details in this case: 90%. That’s the percentage of games the team has won under Meyer as head coach since he joined Ohio State in 2012. Psychological research shows that in almost every area of life, being moral is weighted as more important than being competent. However, in competitive environments such as work and sports, the classic findings flip: competence is prized over character. Although the report does not mention anything about the team’s performance or the resulting financial and reputational benefits of winning, the program’s success may have crowded out concerns over the allegations against Smith and about the many other problematic behaviors he showed.

Unspoken values. Another factor that can increase the likelihood of making unethical decisions is the absence of language around values. Classic research in organizations has found that leaders tend to be reluctant to use “moral language.” For example, leaders are more likely to talk about deadlines, objectives, and effectiveness than values such as integrity, respect, and compassion. Over time, this can license unethical conduct.

The info is here.

Tuesday, October 9, 2018

Morality is the new profit – banks must learn or die

Zoe Williams
The Guardian
Originally posted September 10, 2018

Here is an excerpt:

Ten years ago, “ethical” investing meant not buying shares in arms and alcohol, as if morality were so unfamiliar to financial decision-making that you had to go back to the 19th century and borrow it from the Quakers. The growth of banks with a moral mission – like Triodos (“quality of life, human dignity, sustainability”) – or investments with a social purpose – like Abundance, which finances renewable energy – has been impressive on its own terms, but remained niche, for baby boomers with a conscience. The idea that all market activity should have a purpose other than profit is roughly where it always was on the spectrum, somewhere between Marx and Jesus – one for the rioters, the subversives, the people with beards, unsuited to mainstream discourse.

But there is nothing more pragmatic and less idealistic than to insist on the social purpose of the market; banking cannot survive without it – not as a corporate bolt-on but as its driving and decisive motivation. The derivatives trade cannot weather the consequences of infinite self-interest, because there really will be consequences – extreme global ones. The planet cannot survive an endless cost-benefit analysis in which nature is pitted against profit. Nature will always lose, and so will humanity as a result. Whatever the immediate cause of the next crash, if and when it comes, its roots will be environmental. The Financial Times talks about “the insidious danger that pension funds deflate, leaving a generation without enough money to retire”. The most likely cause for that devaluation of pensions – leaving aside the generation that cannot afford to save for the future – will be stranded assets, pension funds having invested in fossil fuels that cannot be excavated.

The info is here.

Top Cancer Researcher Fails to Disclose Corporate Financial Ties in Major Research Journals

Charles Ornstein and Katie Thomas
The New York Times
Originally published September 8, 2018

One of the world’s top breast cancer doctors failed to disclose millions of dollars in payments from drug and health care companies in recent years, omitting his financial ties from dozens of research articles in prestigious publications like The New England Journal of Medicine and The Lancet.

The researcher, Dr. José Baselga, a towering figure in the cancer world, is the chief medical officer at Memorial Sloan Kettering Cancer Center in New York. He has held board memberships or advisory roles with Roche and Bristol-Myers Squibb, among other corporations, has had a stake in start-ups testing cancer therapies, and played a key role in the development of breakthrough drugs that have revolutionized treatments for breast cancer.

According to an analysis by The New York Times and ProPublica, Dr. Baselga did not follow financial disclosure rules set by the American Association for Cancer Research when he was president of the group. He also left out payments he received from companies connected to cancer research in his articles published in the group’s journal, Cancer Discovery. At the same time, he has been one of the journal’s two editors in chief.

The info is here.

Monday, October 8, 2018

Purpose, Meaning and Morality Without God

Ralph Lewis
Psychology Today Blog
Originally posted September 9, 2018

Here is an excerpt:

Religion is not the source of purpose, meaning and morality. Rather, religion can be understood as having incorporated these natural motivational and social dispositions and having coevolved with human cultures over time. Unsurprisingly, religion has also incorporated our more selfish, aggressive, competitive, and xenophobic human proclivities.

Modern secular societies with the lowest levels of religious belief have achieved far more compassion and flourishing than religious ones.

Secular humanists understand that societal ethics and compassion are achieved solely through human action in a fully natural world. We can rely only on ourselves and our fellow human beings. All we have is each other, huddled together on this lifeboat of a little planet in this vast indifferent universe.

We will need to continue to work actively toward the collective goal of more caring societies in order to further strengthen the progress of our species.

Far from being nihilistic, the fully naturalist worldview of secular humanism empowers us and liberates us from our irrational fears, and from our feelings of abandonment by the god we were told would take care of us, and motivates us to live with a sense of interdependent humanistic purpose. This deepens our feelings of value, engagement, and relatedness. People can and do care, even if the universe doesn’t.

The blog post is here.

Evolutionary Psychology

Downes, Stephen M.
The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.)

Evolutionary psychology is one of many biologically informed approaches to the study of human behavior. Along with cognitive psychologists, evolutionary psychologists propose that much, if not all, of our behavior can be explained by appeal to internal psychological mechanisms. What distinguishes evolutionary psychologists from many cognitive psychologists is the proposal that the relevant internal mechanisms are adaptations—products of natural selection—that helped our ancestors get around the world, survive and reproduce. To understand the central claims of evolutionary psychology we require an understanding of some key concepts in evolutionary biology, cognitive psychology, philosophy of science and philosophy of mind. Philosophers are interested in evolutionary psychology for a number of reasons. For philosophers of science—mostly philosophers of biology—evolutionary psychology provides a critical target. There is a broad consensus among philosophers of science that evolutionary psychology is a deeply flawed enterprise. For philosophers of mind and cognitive science evolutionary psychology has been a source of empirical hypotheses about cognitive architecture and specific components of that architecture. Philosophers of mind are also critical of evolutionary psychology but their criticisms are not as all-encompassing as those presented by philosophers of biology. Evolutionary psychology is also invoked by philosophers interested in moral psychology both as a source of empirical hypotheses and as a critical target.

The entry is here.

Sunday, October 7, 2018

Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?

Zerilli, J., Knott, A., Maclaurin, J. et al.
Philos. Technol. (2018).


We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The demands of practical reason require the justification of action to be pitched at the level of practical reason. Decision tools that support or supplant practical reasoning should not be expected to aim higher than this. We cast this desideratum in terms of Daniel Dennett’s theory of the “intentional stance” and argue that since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form. In practice, this means that the sorts of explanations for algorithmic decisions that are analogous to intentional stance explanations should be preferred over ones that aim at the architectural innards of a decision tool.

The article is here.

Saturday, October 6, 2018

Certainty Is Primarily Determined by Past Performance During Concept Learning

Louis Martí, Francis Mollica, Steven Piantadosi and Celeste Kidd
Open Mind: Discoveries in Cognitive Science
Posted Online August 16, 2018


Prior research has yielded mixed findings on whether learners’ certainty reflects veridical probabilities from observed evidence. We compared predictions from an idealized model of learning to humans’ subjective reports of certainty during a Boolean concept-learning task in order to examine subjective certainty over the course of abstract, logical concept learning. Our analysis evaluated theoretically motivated potential predictors of certainty to determine how well each predicted participants’ subjective reports of certainty. Regression analyses that controlled for individual differences demonstrated that despite learning curves tracking the ideal learning models, reported certainty was best explained by performance rather than measures derived from a learning model. In particular, participants’ confidence was driven primarily by how well they observed themselves doing, not by idealized statistical inferences made from the data they observed.

Download the pdf here.

Key Points: To learn well, you should weigh all the evidence you have accumulated, not just the feedback on your most recent performance. Yet this research suggests that recent performance feedback, rather than the accumulated evidence itself, is what drives a person's sense of certainty when learning new things, including how to tell right from wrong.

Fascinating research, I hope I am interpreting it correctly.  I am not all that certain.

Friday, October 5, 2018

Nike picks a side in America’s culture wars

Andrew Edgecliffe-Johnson
Financial Times
Originally posted September 7, 2018

Here is an excerpt:

This is Nike’s second reason to be confident: drill down into this week’s polls and they show that support for Nike and Kaepernick is strongest among millennial or Gen-Z, African-American, liberal urbanites — the group Nike targets. The company’s biggest risk is becoming “mainstream, the usual, everywhere, tamed”, Prof Lee says. Courting controversy forces its most dedicated fans to defend it and catches the eye of more neutral consumers.

Finally, Nike will have been encouraged by studies showing that consumers reward brands for speaking up on divisive social issues. But it is doing something more novel and calculated than other multinationals that have weighed in on immigration, gun control or race: it did not stumble into this controversy; it sought it.

A polarised populace is a fact of life for brands, in the US and beyond. That leaves them with a choice: try to carry on catering to a vanishing mass-market middle ground, or stake out a position that will infuriate one side but excite the other. The latter strategy has worked for politicians such as Mr Trump. Unlike elected officials, a brand can win with far less than 50.1 per cent of the population behind it. (Nike chief executive Mark Parker told investors last year that it was looking to just 12 global cities to drive 80 per cent of its growth.)

The info is here.

Should Artificial Intelligence Augment Medical Decision Making? The Case for an Autonomy Algorithm

Camillo Lamanna and Lauren Byrne
AMA J Ethics. 2018;20(9):E902-910.


A significant proportion of elderly and psychiatric patients do not have the capacity to make health care decisions. We suggest that machine learning technologies could be harnessed to integrate data mined from electronic health records (EHRs) and social media in order to estimate the confidence of the prediction that a patient would consent to a given treatment. We call this process, which takes data about patients as input and derives a confidence estimate for a particular patient’s predicted health care-related decision as an output, the autonomy algorithm. We suggest that the proposed algorithm would result in more accurate predictions than existing methods, which are resource intensive and consider only small patient cohorts. This algorithm could become a valuable tool in medical decision-making processes, augmenting the capacity of all people to make health care decisions in difficult situations.
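The authors do not publish an implementation, but the core idea of the proposed "autonomy algorithm" (features about a patient go in; a probability that the patient would consent comes out) can be sketched with a toy logistic model. The feature names, weights, and patient values below are all invented for illustration, not drawn from the paper.

```python
import math

# Hand-picked toy weights standing in for a model that would, in the
# authors' proposal, be learned from EHR and social media data.
# All of these numbers are hypothetical.
WEIGHTS = {
    "age_normalized": -0.4,      # age scaled to [0, 1]
    "prior_consent_rate": 2.1,   # fraction of past treatments consented to
    "text_sentiment": 1.3,       # sentiment score from mined text
}
BIAS = -0.8

def consent_confidence(patient):
    """Return an estimated probability that this patient would consent."""
    z = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link to (0, 1)

patient = {"age_normalized": 0.7, "prior_consent_rate": 0.9, "text_sentiment": 0.5}
print(round(consent_confidence(patient), 2))
```

The confidence estimate the authors emphasize is exactly this probability: a downstream decision process could act on the prediction only when the estimate clears some threshold, and defer to human surrogates otherwise.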

The article is here.

Thursday, October 4, 2018

7 Short-Term AI ethics questions

Orlando Torres
Originally posted April 4, 2018

Here is an excerpt:

2. Transparency of Algorithms

Even more worrying than the fact that companies won’t allow their algorithms to be publicly scrutinized is the fact that some algorithms are obscure even to their creators. Deep learning is a rapidly growing technique in machine learning that makes very good predictions but is not really able to explain why it made any particular prediction.

For example, some algorithms have been used to fire teachers, without being able to give them an explanation of why the model indicated they should be fired.

How can we balance the need for more accurate algorithms with the need for transparency towards people who are being affected by these algorithms? If necessary, are we willing to sacrifice accuracy for transparency, as Europe’s new General Data Protection Regulation may do? If it’s true that humans are likely unaware of their true motives for acting, should we demand machines be better at this than we actually are?

3. Supremacy of Algorithms

A similar but slightly different concern emerges from the previous two issues. If we start trusting algorithms to make decisions, who will have the final word on important decisions? Will it be humans, or algorithms?

For example, some algorithms are already being used to determine prison sentences. Given that we know judges’ decisions are affected by their moods, some people may argue that judges should be replaced with “robojudges”. However, a ProPublica study found that one of these popular sentencing algorithms was highly biased against blacks. To find a “risk score”, the algorithm uses inputs about a defendant’s acquaintances that would never be accepted as traditional evidence.

The info is here.

Shouldn’t We Make It Easy to Use Behavioral Science for Good?

Manasee Desai
Originally posted September 4, 2018

The evidence showing that applied behavioral science is a powerful tool for creating social good is growing rapidly. As a result, it’s become much more common for the world’s problem solvers to apply a behavioral lens to their work. Yet this approach can still feel distant to the people trying urgently to improve lives on a daily basis—those working for governments, nonprofits, and other organizations that directly tackle some of the most challenging and pervasive problems facing us today.

All too often, effective strategies for change are either locked behind paywalls or buried in inaccessible, jargon-laden articles. And because of the sheer volume of behavioral solutions being tested now, even people working in the fields that compose the behavioral sciences—like me, for instance—cannot possibly stay on top of every new intervention or application happening across countless fields and countries. This means missed opportunities to apply and scale effective interventions and to do more good in the world.

As a field, figuring out how to effectively report and communicate what we’ve learned from our research and interventions is our own “last mile” problem.

While there is no silver bullet for the problems the world faces, the behavioral science community should (and can) come together to make our battle-tested solutions available to problem solvers, right at their fingertips. Expanding the adoption of behavioral design for social good requires freeing solutions from dense journals and cost-prohibitive paywalls. It also requires distilling complex designs into simpler steps—uniting a community that is passionate about social impact and making the world a better place with applied behavioral science.

That is the aim of the Behavioral Evidence Hub (B-Hub), a curated, open-source digital collection of behavioral interventions proven to impact real-world problems.

The info is here.

Wednesday, October 3, 2018

Association Between Physician Burnout and Patient Safety, Professionalism, and Patient Satisfaction: A Systematic Review and Meta-analysis.

Maria Panagioti, PhD; Keith Geraghty, PhD; Judith Johnson, PhD; et al
JAMA Intern Med. Published online September 4, 2018.


Importance  Physician burnout has taken the form of an epidemic that may affect core domains of health care delivery, including patient safety, quality of care, and patient satisfaction. However, this evidence has not been systematically quantified.

Objective  To examine whether physician burnout is associated with an increased risk of patient safety incidents, suboptimal care outcomes due to low professionalism, and lower patient satisfaction.

Data Sources  MEDLINE, EMBASE, PsycInfo, and CINAHL databases were searched until October 22, 2017, using combinations of the key terms physicians, burnout, and patient care. Detailed standardized searches with no language restriction were undertaken. The reference lists of eligible studies and other relevant systematic reviews were hand-searched.

Data Extraction and Synthesis  Two independent reviewers were involved. The main meta-analysis was followed by subgroup and sensitivity analyses. All analyses were performed using random-effects models. Formal tests for heterogeneity (I2) and publication bias were performed.

Main Outcomes and Measures  The core outcomes were the quantitative associations between burnout and patient safety, professionalism, and patient satisfaction reported as odds ratios (ORs) with their 95% CIs.

Results  Of the 5234 records identified, 47 studies on 42 473 physicians (25 059 [59.0%] men; median age, 38 years [range, 27-53 years]) were included in the meta-analysis. Physician burnout was associated with an increased risk of patient safety incidents (OR, 1.96; 95% CI, 1.59-2.40), poorer quality of care due to low professionalism (OR, 2.31; 95% CI, 1.87-2.85), and reduced patient satisfaction (OR, 2.28; 95% CI, 1.42-3.68). The heterogeneity was high and the study quality was low to moderate. The links between burnout and low professionalism were larger in residents and early-career (≤5 years post residency) physicians compared with middle- and late-career physicians (Cohen Q = 7.27; P = .003). The reporting method of patient safety incidents and professionalism (physician-reported vs system-recorded) significantly influenced the main results (Cohen Q = 8.14; P = .007).

Conclusions and Relevance  This meta-analysis provides evidence that physician burnout may jeopardize patient care; reversal of this risk has to be viewed as a fundamental health care policy goal across the globe. Health care organizations are encouraged to invest in efforts to improve physician wellness, particularly for early-career physicians. The methods of recording patient care quality and safety outcomes require improvements to concisely capture the outcome of burnout on the performance of health care organizations.

The research is here.

Moral Reasoning

Richardson, Henry S.
The Stanford Encyclopedia of Philosophy (Fall 2018 Edition), Edward N. Zalta (ed.)

Here are two brief excerpts:

Moral considerations often conflict with one another. So do moral principles and moral commitments. Assuming that filial loyalty and patriotism are moral considerations, then Sartre’s student faces a moral conflict. Recall that it is one thing to model the metaphysics of morality or the truth conditions of moral statements and another to give an account of moral reasoning. In now looking at conflicting considerations, our interest here remains with the latter and not the former. Our principal interest is in ways that we need to structure or think about conflicting considerations in order to negotiate well our reasoning involving them.


Understanding the notion of one duty overriding another in this way puts us in a position to take up the topic of moral dilemmas. Since this topic is covered in a separate article, here we may simply take up one attractive definition of a moral dilemma. Sinnott-Armstrong (1988) suggested that a moral dilemma is a situation in which the following are true of a single agent:

  1. He ought to do A.
  2. He ought to do B.
  3. He cannot do both A and B.
  4. (1) does not override (2) and (2) does not override (1).

This way of defining moral dilemmas distinguishes them from the kind of moral conflict, such as Ross’s promise-keeping/accident-prevention case, in which one of the duties is overridden by the other. Arguably, Sartre’s student faces a moral dilemma. Making sense of a situation in which neither of two duties overrides the other is easier if deliberative commensurability is denied. Whether moral dilemmas are possible will depend crucially on whether “ought” implies “can” and whether any pair of duties such as those comprised by (1) and (2) implies a single, “agglomerated” duty that the agent do both A and B. If either of these purported principles of the logic of duties is false, then moral dilemmas are possible.
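The closing argument of the excerpt can be made explicit. Writing O for "ought" and \Diamond for "can", a sketch of the definition in standard deontic notation (my gloss, not the entry's own formalism):

```latex
\begin{aligned}
\text{Dilemma: }\quad & O(A),\;\; O(B),\;\; \neg\Diamond(A \wedge B)\\
\text{Agglomeration: }\quad & O(A) \wedge O(B) \rightarrow O(A \wedge B)\\
\text{Ought implies can: }\quad & O(X) \rightarrow \Diamond(X)
\end{aligned}
```

Agglomeration applied to the first two conditions gives O(A ∧ B); ought-implies-can then yields ◇(A ∧ B), contradicting the third condition. So a genuine dilemma is possible only if at least one of the two principles fails, which is the point of the excerpt's final sentence.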

The entry is here.

Tuesday, October 2, 2018

For the first time, researchers will release genetically engineered mosquitoes in Africa

Ike Swetlitz
Originally posted September 5, 2018

The government of Burkina Faso granted scientists permission to release genetically engineered mosquitoes anytime this year or next, researchers announced Wednesday. It’s a key step in the broader efforts to use bioengineering to eliminate malaria in the region.

The release, which scientists are hoping to execute this month, will be the first time that any genetically engineered animal is released into the wild in Africa. While these particular mosquitoes won’t have any mutations related to malaria transmission, researchers are hoping their release, and the work that led up to it, will help improve the perception of the research and trust in the science among regulators and locals alike. It will also inform future releases.

Teams in three African countries — Burkina Faso, Mali, and Uganda — are building the groundwork to eventually let loose “gene drive” mosquitoes, which would contain a mutation that would significantly and quickly reduce the mosquito population. Genetically engineered mosquitoes have already been released in places like Brazil and the Cayman Islands, though animals with gene drives have never been released in the wild.

The info is here.

Philosophy of Multicultures

Owen Flanagan
Philosophers Magazine
Originally published August 19, 2018

Here is an excerpt:

First, as I have been insisting, we live increasingly in multicultural, multiethnic, cosmopolitan worlds. Depending on one’s perspective these worlds are grand experiments in tolerant living, worlds in which prejudices break down; or they are fractured, wary, tense ethnic and religious cohousing projects; or they are melting pots where differences are thinned out and homogenised over time; or they are admixtures or collages of the best values, norms, and practices, the sociomoral equivalent of fine fusion cuisine or excellent world music that creates flavours or sounds from multiple fine sources; or on the other side, a blend of the worst of incommensurable value systems and practices, clunky and degenerate. It is good for ethicists to know more about people who are not from the North Atlantic (or its outposts). Or even if they are from the North Atlantic are not from elites or are not from “around here”. It matters how members of original displaced communities or people who were brought here or came here as chattel slaves or indentured workers or political refugees or for economic opportunity, have thought about virtues, values, moral psychology, normative ethics, and good human lives.

Second, most work in empirical moral psychology has been done on WEIRD people (Western, Educated, Industrialised, Rich, Democratic) and there is every reason to think WEIRD people are unrepresentative, possibly the most unrepresentative group imaginable, less representative than our ancestors when the ice melted at the end of the Pleistocene. It may be that the assumptions we make about the nature of persons and the human good in the footnotes-to-Plato lineage, assumptions which seem secure, are in fact parochial and worth re-examining.

Third, the methods of genetics, empirical psychology, evolutionary psychology, and neuroscience get lots of attention recently in moral psychology, as if they can ground an entirely secular and neutral form of common life. But it would be a mistake to think that these sciences are superior to the wisdom of the ages in gaining deep knowledge about human nature and the human good or that they are robust enough to provide a picture of a good life.

The info is here.