Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, April 23, 2019

4 Ways Lying Becomes the Norm at a Company

Ron Carucci
Harvard Business Review
Originally published February 15, 2019

Many of the corporate scandals in the past several years — think Volkswagen or Wells Fargo — have been cases of wide-scale dishonesty. It’s hard to fathom how lying and deceit permeated these organizations. Some researchers point to group decision-making processes or psychological traps that snare leaders into justification of unethical choices. Certainly those factors are at play, but they largely explain dishonest behavior at an individual level. I wondered about systemic factors that might influence whether or not people in organizations distort or withhold the truth from one another.

This is what my team set out to understand through a 15-year longitudinal study. We analyzed 3,200 interviews that were conducted as part of 210 organizational assessments to see whether there were factors that predicted whether or not people inside a company would be honest. Our research yielded four factors — not individual character traits, but organizational issues — that played a role. The good news is that these factors are completely within a corporation’s control, and improving them can make your company more honest and help avert the reputation and financial disasters that dishonesty can lead to.

The stakes here are high. Accenture’s Competitive Agility Index, a 7,000-company, 20-industry analysis, for the first time tangibly quantified how a decline in stakeholder trust impacts a company’s financial performance. The analysis reveals that more than half (54%) of the companies on the index experienced a material drop in trust — from incidents such as product recalls, fraud, data breaches and C-suite missteps — which equates to a minimum of $180 billion in missed revenues. Worse, following a drop in trust, a company’s index score drops 2 points on average, negatively impacting revenue growth by 6% and EBITDA by 10% on average.

The info is here.

How big tech designs its own rules of ethics to avoid scrutiny and accountability

David Watts
theconversation.com
Originally posted March 28, 2019

Here is an excerpt:

“Applied ethics” aims to bring the principles of ethics to bear on real-life situations. There are numerous examples.

Public sector ethics are governed by law. There are consequences for those who breach them, including disciplinary measures, termination of employment and sometimes criminal penalties. To become a lawyer, I had to provide evidence to a court that I am a “fit and proper person”. To continue to practice, I’m required to comply with detailed requirements set out in the Australian Solicitors Conduct Rules. If I breach them, there are consequences.

The features of applied ethics are that they are specific, there are feedback loops, guidance is available, they are embedded in organisational and professional culture, there is proper oversight, there are consequences when they are breached and there are independent enforcement mechanisms and real remedies. They are part of a regulatory apparatus and not just “feel good” statements.

Feelgood, high-level data ethics principles are not fit for the purpose of regulating big tech. Applied ethics may have a role to play, but because they are occupation- or discipline-specific, they cannot be relied on to do all, or even most of, the heavy lifting.

The info is here.

Monday, April 22, 2019

Moral identity relates to the neural processing of third-party moral behavior

Carolina Pletti, Jean Decety, & Markus Paulus
Social Cognitive and Affective Neuroscience
https://doi.org/10.1093/scan/nsz016

Abstract

Moral identity, or moral self, is the degree to which being moral is important to a person’s self-concept. It is hypothesized to be the “missing link” between moral judgment and moral action. However, its cognitive and psychophysiological mechanisms are still subject to debate. In this study, we used Event-Related Potentials (ERPs) to examine whether the moral self concept is related to how people process prosocial and antisocial actions. To this end, participants’ implicit and explicit moral self-concept was assessed. We examined whether individual differences in moral identity relate to differences in early, automatic processes (i.e. EPN, N2) or late, cognitively controlled processes (i.e. LPP) while observing prosocial and antisocial situations. Results show that a higher implicit moral self was related to a lower EPN amplitude for prosocial scenarios. In addition, an enhanced explicit moral self was related to a lower N2 amplitude for prosocial scenarios. The findings demonstrate that the moral self affects the neural processing of morally relevant stimuli during third-party evaluations. They support theoretical considerations that the moral self already affects (early) processing of moral information.

Here is the conclusion:

Taken together, notwithstanding some limitations, this study provides novel insights into the nature of the moral self. Importantly, the results suggest that the moral self concept influences the early processing of morally relevant contexts. Moreover, the implicit and the explicit moral self concepts have different neural correlates, influencing respectively early and intermediate processing stages. Overall, the findings inform theoretical approaches on how the moral self informs social information processing (Lapsley & Narvaez, 2004).

Psychiatry’s Incurable Hubris

Gary Greenberg
The Atlantic
April 2019 issue

Here is an excerpt:

The need to dispel widespread public doubt haunts another debacle that Harrington chronicles: the rise of the “chemical imbalance” theory of mental illness, especially depression. The idea was first advanced in the early 1950s, after scientists demonstrated the principles of chemical neurotransmission; it was supported by the discovery that consciousness-altering drugs such as LSD targeted serotonin and other neurotransmitters. The idea exploded into public view in the 1990s with the advent of direct-to-consumer advertising of prescription drugs, antidepressants in particular. Harrington documents ad campaigns for Prozac and Zoloft that assured wary customers the new medications were not simply treating patients’ symptoms by altering their consciousness, as recreational drugs might. Instead, the medications were billed as repairing an underlying biological problem.

The strategy worked brilliantly in the marketplace. But there was a catch. “Ironically, just as the public was embracing the ‘serotonin imbalance’ theory of depression,” Harrington writes, “researchers were forming a new consensus” about the idea behind that theory: It was “deeply flawed and probably outright wrong.” Stymied, drug companies have for now abandoned attempts to find new treatments for mental illness, continuing to peddle the old ones with the same claims. And the news has yet to reach, or at any rate affect, consumers. At last count, more than 12 percent of Americans ages 12 and older were taking antidepressants. The chemical-imbalance theory, like the revamped DSM, may fail as science, but as rhetoric it has turned out to be a wild success.

The info is here.

Sunday, April 21, 2019

Microsoft will be adding AI ethics to its standard checklist for product release

Alan Boyle
www.geekwire.com
Originally posted March 25, 2019

Microsoft will “one day very soon” add an ethics review focusing on artificial-intelligence issues to its standard checklist of audits that precede the release of new products, according to Harry Shum, a top executive leading the company’s AI efforts.

AI ethics will join privacy, security and accessibility on the list, Shum said today at MIT Technology Review’s EmTech Digital conference in San Francisco.

Shum, who is executive vice president of Microsoft’s AI and Research group, said companies involved in AI development “need to engineer responsibility into the very fabric of the technology.”

Among the ethical concerns are the potential for AI agents to pick up all-too-human biases from the data on which they’re trained, to piece together privacy-invading insights through deep data analysis, to go horribly awry due to gaps in their perceptual capabilities, or simply to be given too much power by human handlers.

Shum noted that as AI becomes better at analyzing emotions, conversations and writings, the technology could open the way to increased propaganda and misinformation, as well as deeper intrusions into personal privacy.

The info is here.

Saturday, April 20, 2019

Cardiologist Eric Topol on How AI Can Bring Humanity Back to Medicine

Alice Park
Time.com
Originally published March 14, 2019

Here is an excerpt:

What are the best examples of how AI can work in medicine?

We’re seeing rapid uptake of algorithms that make radiologists more accurate. The other group already deriving benefit is ophthalmologists. Diabetic retinopathy, which is a terribly underdiagnosed cause of blindness and a complication of diabetes, is now diagnosed by a machine with an algorithm that is approved by the Food and Drug Administration. And we’re seeing it hit at the consumer level with a smart-watch app that uses a deep learning algorithm to detect atrial fibrillation.

Is that really artificial intelligence, in the sense that the machine has learned about medicine like doctors?

Artificial intelligence is different from human intelligence. It’s really about using machines with software and algorithms to ingest data and come up with the answer, whether that data is what someone says in speech, or reading patterns and classifying or triaging things.

What worries you the most about AI in medicine?

I have lots of worries. First, there’s the issue of privacy and security of the data. And I’m worried about whether the AI algorithms are always proved out with real patients. Finally, I’m worried about how AI might worsen some inequities. Algorithms are not biased, but the data we put into those algorithms, because they are chosen by humans, often are. But I don’t think these are insoluble problems.

The info is here.

Friday, April 19, 2019

Leader's group-norm violations elicit intentions to leave the group – If the group-norm is not affirmed

Lara Ditrich, Adrian Lüders, Eva Jonas, & Kai Sassenberg
Journal of Experimental Social Psychology
Available online 2 April 2019

Abstract

Group members, even central ones like group leaders, do not always adhere to their group's norms and show norm-violating behavior instead. Observers of this kind of behavior have been shown to react negatively in such situations, and in extreme cases, may even leave their group. The current work set out to test how this reaction might be prevented. We assumed that group-norm affirmations can buffer leaving intentions in response to group-norm violations and tested three potential mechanisms underlying the buffering effect of group-norm affirmations. To this end, we conducted three experiments in which we manipulated group-norm violations and group-norm affirmations. In Study 1, we found group-norm affirmations to buffer leaving intentions after group-norm violations. However, we did not find support for the assumption that group-norm affirmations change how a behavior is evaluated or preserve group members' identification with their group. Thus, neither of these variables can explain the buffering effect of group-norm affirmations. Studies 2 & 3 revealed that group-norm affirmations instead reduce perceived effectiveness of the norm-violator, which in turn predicted lower leaving intentions. The present findings will be discussed based on previous research investigating the consequences of norm violations.

The research is here.

Duke agrees to pay $112.5 million to settle allegation it fraudulently obtained federal research funding

Seth Thomas Gulledge
Triangle Business Journal
Originally posted March 25, 2019

Duke University has agreed to pay $112.5 million to settle a suit with the federal government over allegations the university submitted false research reports to receive federal research dollars.

This week, the university reached a settlement over allegations brought forward by whistleblower Joseph Thomas – a former Duke employee – who alleged that during his time working as a lab research analyst in the pulmonary, asthma and critical care division of Duke University Health Systems, the clinical research coordinator, Erin Potts-Kant, manipulated and falsified studies to receive grant funding.

The case also contends that the university and its office of research support, upon discovering the fraud, knowingly concealed it from the government.

According to court documents, Duke was accused of submitting claims to the National Institutes of Health (NIH) and the Environmental Protection Agency (EPA) between 2006 and 2018 that contained "false or fabricated data," causing the two agencies to pay out grant funds they "otherwise would not have." Those fraudulent submissions, the case claims, netted the university nearly $200 million in federal research funding.

“Taxpayers expect and deserve that federal grant dollars will be used efficiently and honestly. Individuals and institutions that receive research funding from the federal government must be scrupulous in conducting research for the common good and rigorous in rooting out fraud,” said Matthew Martin, U.S. attorney for the Middle District of North Carolina in a statement announcing the settlement. “May this serve as a lesson that the use of false or fabricated data in grant applications or reports is completely unacceptable.”

The info is here.

Thursday, April 18, 2019

Google cancels AI ethics board in response to outcry

Kelsey Piper
www.Vox.com
Originally published April 4, 2019

This week, Vox and other outlets reported that Google’s newly created AI ethics board was falling apart amid controversy over several of the board members.

Well, it’s officially done falling apart — it’s been canceled. Google told Vox on Thursday that it’s pulling the plug on the ethics board.

The board survived for barely more than one week. Founded to guide “responsible development of AI” at Google, it would have had eight members and met four times over the course of 2019 to consider concerns about Google’s AI program. Those concerns include how AI can enable authoritarian states, how AI algorithms produce disparate outcomes, whether to work on military applications of AI, and more. But it ran into problems from the start.

Thousands of Google employees signed a petition calling for the removal of one board member, Heritage Foundation president Kay Coles James, over her comments about trans people and her organization’s skepticism of climate change. Meanwhile, the inclusion of drone company CEO Dyan Gibbens reopened old divisions over the use of the company’s AI for military applications.

The info is here.