Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, April 30, 2019

Ethics in AI Are Not Optional

Rob Daly
www.marketsmedia.com
Originally posted April 12, 2019

Artificial intelligence is a critical feature of the future of financial services, but firms should not be penny-wise and pound-foolish in their race to develop the most advanced offering possible, experts caution.

“You do not need to be on the frontier of technology if you are not a technology company,” said Greg Baxter, the chief digital officer at MetLife, in his keynote address during Celent’s annual Innovation and Insight Day. “You just have to permit your people to use the technology.”

More effort should be spent on developing the various policies that will govern the deployment of the technology, he added.

MetLife spends more time on ethics and legal issues than it does on technology, according to Baxter.

Firms should be wary when implementing AI in such a fashion that it alienates clients by being too intrusive and ruining the customer experience. “If data is the new currency, its credit line is trust,” said Baxter.

The info is here.

Should animals, plants, and robots have the same rights as you?

Sigal Samuel
www.vox.com
Originally posted April 4, 2019

Here is an excerpt:

The moral circle is a fundamental concept among philosophers, psychologists, activists, and others who think seriously about what motivates people to do good. It was introduced by historian William Lecky in the 1860s and popularized by philosopher Peter Singer in the 1980s.

Now it’s cropping up more often in activist circles as new social movements use it to make the case for granting rights to more and more entities. Animals. Nature. Robots. Should they all get rights similar to the ones you enjoy? For example, you have the right not to be unjustly imprisoned (liberty) and the right not to be experimented on (bodily integrity). Maybe animals should too.

If you’re tempted to dismiss that notion as absurd, ask yourself: How do you decide whether an entity deserves rights?

Many people think that sentience, the ability to feel sensations like pain and pleasure, is the deciding factor. If that’s the case, what degree of sentience is required to make the cut? Maybe you think we should secure legal rights for chimpanzees and elephants — as the Nonhuman Rights Project is aiming to do — but not for, say, shrimp.

Some people think sentience is the wrong litmus test; they argue we should include anything that’s alive or that supports living things. Maybe you think we should secure rights for natural ecosystems, as the Community Environmental Legal Defense Fund is doing. Lake Erie won legal personhood status in February, and recent years have seen rights granted to rivers and forests in New Zealand, India, and Colombia.

The info is here.

Monday, April 29, 2019

How Trump has changed white evangelicals’ views about morality

David Campbell and Geoffrey Layman
The Washington Post
Originally published April 25, 2019

Recently, Democratic presidential candidate Pete Buttigieg has been criticizing religious conservatives — especially Vice President Pence — for supporting President Trump, despite his lewd behavior. To drive home the point, Buttigieg often refers to Trump as the “porn star president.”

We were curious about the attitudes of rank-and-file evangelicals. After more than two years of Trump in the White House, how do they feel about a president’s private morality?

From 2011 to 2016, white evangelicals dramatically changed their minds about the importance of politicians’ private behavior

Back in 2016, many journalists and commentators pointed out a stunning change in how white evangelicals perceived the connection between private and public morality. In 2011, a poll conducted by the Public Religion Research Institute (PRRI) and the Religion News Service found that 60 percent of white evangelicals believed that a public official who “commits an immoral act in their personal life” cannot still “behave ethically and fulfill their duties in their public and professional life.” But in an October 2016 poll by PRRI and the Brookings Institution — after the release of the infamous “Access Hollywood” tape — only 20 percent of evangelicals, answering the same question, said that private immorality meant someone could not behave ethically in public.



The info is here.

Nova Scotia to become 1st in North America with presumed consent for organ donation

Michael Gorman
www.cbc.ca
Originally posted April 2, 2019

Here is an excerpt:

Premier Stephen McNeil said the bill fills a need within the province, noting Nova Scotia has some of the highest per capita rates of willing donors in the country.

"That doesn't always translate into the actual act of giving," he said.

"We know that there are many ways that we can continue to improve the system that we have."

McNeil pledged to put the necessary services in place to allow the province's donor program to live up to the promise of the legislation.

"We know that in many parts of our province — including the one I live in, which is a rural part of Nova Scotia — we have work to do," he said.

"I will make sure that the work that is required to build the system and supports around this will happen."

The bill will not be proclaimed right away.

Health Minister Randy Delorey said government officials would spend 12-18 months educating the public about the change and working on getting health-care workers the support they need to enhance the program.

Even with the change, Delorey said, people should continue making their wishes known to loved ones, so there can be no question about intentions.

The info is here.

Sunday, April 28, 2019

No Support for Historical Candidate Gene or Candidate Gene-by-Interaction Hypotheses for Major Depression Across Multiple Large Samples

Richard Border, Emma C. Johnson, and others
The American Journal of Psychiatry
https://doi.org/10.1176/appi.ajp.2018.18070881

Abstract

Objective:
Interest in candidate gene and candidate gene-by-environment interaction hypotheses regarding major depressive disorder remains strong despite controversy surrounding the validity of previous findings. In response to this controversy, the present investigation empirically identified 18 candidate genes for depression that have been studied 10 or more times and examined evidence for their relevance to depression phenotypes.

Methods:
Utilizing data from large population-based and case-control samples (Ns ranging from 62,138 to 443,264 across subsamples), the authors conducted a series of preregistered analyses examining candidate gene polymorphism main effects, polymorphism-by-environment interactions, and gene-level effects across a number of operational definitions of depression (e.g., lifetime diagnosis, current severity, episode recurrence) and environmental moderators (e.g., sexual or physical abuse during childhood, socioeconomic adversity).

Results:
No clear evidence was found for any candidate gene polymorphism associations with depression phenotypes or any polymorphism-by-environment moderator effects. As a set, depression candidate genes were no more associated with depression phenotypes than noncandidate genes. The authors demonstrate that phenotypic measurement error is unlikely to account for these null findings.

Conclusions:
The study results do not support previous depression candidate gene findings, in which large genetic effects are frequently reported in samples orders of magnitude smaller than those examined here. Instead, the results suggest that early hypotheses about depression candidate genes were incorrect and that the large number of associations reported in the depression candidate gene literature are likely to be false positives.

The research is here.

Editor's note: Depression is a complex, multivariate experience that is not primarily genetic in its origins.

Saturday, April 27, 2019

When Would a Robot Have Free Will?

Eddy Nahmias
The NeuroEthics Blog
Originally posted April 1, 2019

Here are two excerpts:

Joshua Shepherd (2015) had found evidence that people judge humanoid robots that behave like humans and are described as conscious to be more free and responsible than robots that carry out these behaviors without consciousness. We wanted to explore what sorts of consciousness influence attributions of free will and moral responsibility—i.e., deserving praise and blame for one’s actions. We developed several scenarios describing futuristic humanoid robots or aliens, in which they were described as either having or as lacking: conscious sensations, conscious emotions, and language and intelligence. We found that people’s attributions of free will generally track their attributions of conscious emotions more than attributions of conscious sensory experiences or intelligence and language. Consistent with this, we also found that people are more willing to attribute free will to aliens than robots, and in more recent studies, we see that people also attribute free will to many animals, with dolphins and dogs near the levels attributed to human adults.

These results suggest two interesting implications. First, when philosophers analyze free will in terms of the control required to be morally responsible—e.g., being ‘reasons-responsive’—they may be creating a term of art (perhaps a useful one). Laypersons seem to distinguish the capacity to have free will from the capacities required to be responsible. Our studies suggest that people may be willing to hold intelligent but non-conscious robots or aliens responsible even when they are less willing to attribute to them free will.

(cut)

A second interesting implication of our results is that many people seem to think that having a biological body and conscious feelings and emotions are important for having free will. The question is: why? Philosophers and scientists have often asserted that consciousness is required for free will, but most have been vague about what the relationship is. One plausible possibility we are exploring is that people think that what matters for an agent to have free will is that things can really matter to the agent. And for anything to matter to an agent, she has to be able to care—that is, she has to have foundational, intrinsic motivations that ground and guide her other motivations and decisions.

The info is here.

Friday, April 26, 2019

EU beats Google to the punch in setting strategy for ethical A.I.

Elizabeth Schulze
www.CNBC.com
Originally posted April 8, 2019

Less than one week after Google scrapped its AI ethics council, the European Union has set out its own guidelines for achieving “trustworthy” artificial intelligence.

On Monday, the European Commission released a set of steps to maintain ethics in artificial intelligence, as companies and governments weigh both the benefits and risks of the far-reaching technology.

“The ethical dimension of AI is not a luxury feature or an add-on,” said Andrus Ansip, EU vice-president for the digital single market, in a press release Monday. “It is only with trust that our society can fully benefit from technologies.”

The EU defines artificial intelligence as systems that show “intelligent behavior,” allowing them to analyze their environment and perform tasks with some degree of autonomy. AI is already transforming businesses in a variety of functions, like automating repetitive tasks and analyzing troves of data. But the technology raises a series of ethical questions, such as how to ensure algorithms are programmed without bias and how to hold AI accountable if something goes wrong.

The info is here.

Social media giants no longer can avoid moral compass

Don Hepburn
thehill.com
Originally published April 1, 2019

Here is an excerpt:

There are genuine moral, legal and technical dilemmas in addressing the challenges raised by the ubiquitous nature of the not-so-new social media conglomerates. Why, then, are social media giants avoiding the moral compass, evading legal guidelines and ignoring technical solutions available to them? The answer is, their corporate culture refuses to be held accountable to the same standards the public has applied to all other global corporations for the past five decades.

A wholesale change of culture and leadership is required within the social media industry. The culture of “everything goes” because “we are the future” needs to be more than tweaked; it must come to an end. Like any large conglomerate, social media platforms cannot ignore the public’s demand that they act with some semblance of responsibility. Just like the early stages of the U.S. coal, oil and chemical industries, the social media industry is impacting not only our physical environment but the social good and public safety. No serious journalism organization would ever allow a stranger to write their own hate-filled stories (with photos) for their newspaper’s daily headline — that’s why there’s a position called editor-in-chief.

If social media giants insist they are open platforms, then anyone can purposefully exploit them for good or evil. But if social media platforms demonstrate no moral or ethical standards, they should be subject to some form of government regulation. We have regulatory environments where we see the need to protect the public good against the need for profit-driven enterprises; why should social media platforms be given preferential treatment?

The info is here.

Thursday, April 25, 2019

The New Science of How to Argue—Constructively

Jesse Singal
The Atlantic
Originally published April 7, 2019

Here is an excerpt:

Once you know a term like decoupling, you can identify instances in which a disagreement isn’t really about X anymore, but about Y and Z. When some readers first raised doubts about a now-discredited Rolling Stone story describing a horrific gang rape at the University of Virginia, they noted inconsistencies in the narrative. Others insisted that such commentary fit into destructive tropes about women fabricating rape claims, and therefore should be rejected on its face. The two sides weren’t really talking; one was debating whether the story was a hoax, while the other was responding to the broader issue of whether rape allegations are taken seriously. Likewise, when scientists bring forth solid evidence that sexual orientation is innate, or close to it, conservatives have lashed out against findings that would “normalize” homosexuality. But the dispute over which sexual acts, if any, society should discourage is totally separate from the question of whether sexual orientation is, in fact, inborn. Because of a failure to decouple, people respond indignantly to factual claims when they’re actually upset about how those claims might be interpreted.

Nerst believes that the world can be divided roughly into “high decouplers,” for whom decoupling comes easy, and “low decouplers,” for whom it does not. This is the sort of area where erisology could produce empirical insights: What characterizes people’s ability to decouple? Nerst believes that hard-science types are better at it, on average, while artistic types are worse. After all, part of being an artist is seeing connections where other people don’t—so maybe it’s harder for them to not see connections in some cases. Nerst might be wrong. Either way, it’s the sort of claim that could be fairly easily tested if the discipline caught on.

The info is here.

The Brave New World of Sex Robots

Mark Wolverton
undark.org
Originally posted March 29, 2019

Here is an excerpt:

But as the technology develops apace, so do a host of other issues, including political and social ones (Why such emphasis on feminine bots rather than male? Do sexbots really need a “gender” at all?); philosophical and ethical ones (Is sex with a robot really “sex”? What if the robots are sentient?); and legal ones (Does sex with a robot count as cheating on your human partner?).

Many of these concerns overlap with present controversies regarding AI in general, but in this realm, tied so closely with the most profound manifestations of human intimacy, they feel more personal and controversial. Perhaps as a result, Devlin has a self-admitted tendency at times to slip into somewhat heavy-handed feminist polemics, which can overshadow or obscure possible alternative interpretations to some questions — it’s arguable whether the “Blade Runner” films have “a woman problem,” for example, or whether the prevalence of sexbots with idealized and identifiably feminine aesthetics is solely a result of “male objectification.”

Informed by her background as a computer scientist, Devlin provides excellent nuts-and-bolts technical explanations of the fundamentals of machine learning, neural networks, and language processing that provide the necessary foundation for her explorations of the subject, whose sometimes sensitive nature is eased by her sly sense of humor.

The info is here.

Wednesday, April 24, 2019

134 Activities to Add to Your Self-Care Plan

GoodTherapy.org Staff
www.goodtherapy.org
Originally posted June 13, 2015

At its most basic definition, self-care is any intentional action taken to meet an individual’s physical, mental, spiritual, or emotional needs. In short, it’s all the little ways we take care of ourselves to avoid a breakdown in those respective areas of health.

You may find that, at certain points, the world and the people in it place greater demands on your time, energy, and emotions than you might feel able to handle. This is precisely why self-care is so important. It is the routine maintenance you need to do to function your best not only for others, but also for yourself.

GoodTherapy.org’s own business and administrative, web development, outreach and advertising, editorial and education, and support teams have compiled a massive list of some of their own personal self-care activities to offer some help for those struggling to come up with their own maintenance plan. Next time you find yourself saying, “I really need to do something for myself,” browse our list and pick something that speaks to you. Be silly, be caring to others, and make your self-care a priority! In most cases, taking care of yourself doesn’t even have to cost anything. And because self-care is as unique as the individual performing it, we’d love to invite you to comment and add any of your own personal self-care activities in the comments section below. Give back to your fellow readers and share some of the little ways you take care of yourself.

The list is here.

Note: Self-care enhances the possibility of competent practice. Good self-care skills are important to promote ethical practice.

The Growing Marketplace For AI Ethics

Forbes Insight Team
Forbes.com
Originally posted March 27, 2019

As companies have raced to adopt artificial intelligence (AI) systems at scale, they have also sped through, and sometimes spun out in, the ethical obstacle course AI often presents.

AI-powered loan and credit approval processes have been marred by unforeseen bias. Same with recruiting tools. Smart speakers have secretly turned on and recorded thousands of minutes of audio of their owners.

Unfortunately, there’s no industry-standard, best-practices handbook on AI ethics for companies to follow—at least not yet. Some large companies, including Microsoft and Google, are developing their own internal ethical frameworks.

A number of think tanks, research organizations, and advocacy groups, meanwhile, have been developing a wide variety of ethical frameworks and guidelines for AI. Below is a brief roundup of some of the more influential models to emerge—from the Asilomar Principles to best-practice recommendations from the AI Now Institute.

“Companies need to study these ethical frameworks because this is no longer a technology question. It’s an existential human one,” says Hanson Hosein, director of the Communication Leadership program at the University of Washington. “These questions must be answered hand-in-hand with whatever’s being asked about how we develop the technology itself.”

The info is here.

Tuesday, April 23, 2019

4 Ways Lying Becomes the Norm at a Company

Ron Carucci
Harvard Business Review
Originally published February 15, 2019

Many of the corporate scandals in the past several years — think Volkswagen or Wells Fargo — have been cases of wide-scale dishonesty. It’s hard to fathom how lying and deceit permeated these organizations. Some researchers point to group decision-making processes or psychological traps that snare leaders into justification of unethical choices. Certainly those factors are at play, but they largely explain dishonest behavior at an individual level, and I wondered about systemic factors that might influence whether or not people in organizations distort or withhold the truth from one another.

This is what my team set out to understand through a 15-year longitudinal study. We analyzed 3,200 interviews that were conducted as part of 210 organizational assessments to see whether there were factors that predicted whether or not people inside a company will be honest. Our research yielded four factors — not individual character traits, but organizational issues — that played a role. The good news is that these factors are completely within a corporation’s control and improving them can make your company more honest, and help avert the reputation and financial disasters that dishonesty can lead to.

The stakes here are high. Accenture’s Competitive Agility Index — a 7,000-company, 20-industry analysis — for the first time tangibly quantified how a decline in stakeholder trust impacts a company’s financial performance. The analysis reveals more than half (54%) of companies on the index experienced a material drop in trust — from incidents such as product recalls, fraud, data breaches and c-suite missteps — which equates to a minimum of $180 billion in missed revenues. Worse, following a drop in trust, a company’s index score drops 2 points on average, negatively impacting revenue growth by 6% and EBITDA by 10% on average.

The info is here.

How big tech designs its own rules of ethics to avoid scrutiny and accountability

David Watts
theconversation.com
Originally posted March 28, 2019

Here is an excerpt:

“Applied ethics” aims to bring the principles of ethics to bear on real-life situations. There are numerous examples.

Public sector ethics are governed by law. There are consequences for those who breach them, including disciplinary measures, termination of employment and sometimes criminal penalties. To become a lawyer, I had to provide evidence to a court that I am a “fit and proper person”. To continue to practice, I’m required to comply with detailed requirements set out in the Australian Solicitors Conduct Rules. If I breach them, there are consequences.

The features of applied ethics are that they are specific, there are feedback loops, guidance is available, they are embedded in organisational and professional culture, there is proper oversight, there are consequences when they are breached and there are independent enforcement mechanisms and real remedies. They are part of a regulatory apparatus and not just “feel good” statements.

Feelgood, high-level data ethics principles are not fit for the purpose of regulating big tech. Applied ethics may have a role to play, but because they are occupation- or discipline-specific, they cannot be relied on to do all, or even most of, the heavy lifting.

The info is here.

Monday, April 22, 2019

Moral identity relates to the neural processing of third-party moral behavior

Carolina Pletti, Jean Decety, & Markus Paulus
Social Cognitive and Affective Neuroscience
https://doi.org/10.1093/scan/nsz016

Abstract

Moral identity, or moral self, is the degree to which being moral is important to a person’s self-concept. It is hypothesized to be the “missing link” between moral judgment and moral action. However, its cognitive and psychophysiological mechanisms are still subject to debate. In this study, we used Event-Related Potentials (ERPs) to examine whether the moral self concept is related to how people process prosocial and antisocial actions. To this end, participants’ implicit and explicit moral self-concept was assessed. We examined whether individual differences in moral identity relate to differences in early, automatic processes (i.e. EPN, N2) or late, cognitively controlled processes (i.e. LPP) while observing prosocial and antisocial situations. Results show that a higher implicit moral self was related to a lower EPN amplitude for prosocial scenarios. In addition, an enhanced explicit moral self was related to a lower N2 amplitude for prosocial scenarios. The findings demonstrate that the moral self affects the neural processing of morally relevant stimuli during third-party evaluations. They support theoretical considerations that the moral self already affects (early) processing of moral information.

Here is the conclusion:

Taken together, notwithstanding some limitations, this study provides novel insights into the nature of the moral self. Importantly, the results suggest that the moral self concept influences the early processing of morally relevant contexts. Moreover, the implicit and the explicit moral self concepts have different neural correlates, influencing respectively early and intermediate processing stages. Overall, the findings inform theoretical approaches on how the moral self informs social information processing (Lapsley & Narvaez, 2004).
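
Note: For readers unfamiliar with ERP measures such as the EPN, N2, and LPP, the quantity compared across conditions is typically the mean voltage in a fixed time window after stimulus onset, averaged over trials and a set of electrodes. Below is a minimal, illustrative sketch of that computation on simulated data; the time window, channels, and numbers are invented for the example and are not the authors' pipeline.

import numpy as np

# Simulated EEG epochs: (n_trials, n_channels, n_samples) at 500 Hz,
# with each epoch starting at stimulus onset.
rng = np.random.default_rng(1)
epochs = rng.normal(size=(40, 64, 500))  # 40 trials, 64 channels, 1 s
sfreq = 500.0

def mean_amplitude(epochs, tmin, tmax, channels):
    # Mean voltage across trials, selected channels, and a time window.
    start, stop = int(tmin * sfreq), int(tmax * sfreq)
    return float(epochs[:, channels, start:stop].mean())

# e.g., an EPN-like 200-300 ms window over invented posterior channels
print(mean_amplitude(epochs, 0.2, 0.3, channels=[50, 51, 52]))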

Psychiatry’s Incurable Hubris

Gary Greenberg
The Atlantic
April 2019 issue

Here is an excerpt:

The need to dispel widespread public doubt haunts another debacle that Harrington chronicles: the rise of the “chemical imbalance” theory of mental illness, especially depression. The idea was first advanced in the early 1950s, after scientists demonstrated the principles of chemical neurotransmission; it was supported by the discovery that consciousness-altering drugs such as LSD targeted serotonin and other neurotransmitters. The idea exploded into public view in the 1990s with the advent of direct-to-consumer advertising of prescription drugs, antidepressants in particular. Harrington documents ad campaigns for Prozac and Zoloft that assured wary customers the new medications were not simply treating patients’ symptoms by altering their consciousness, as recreational drugs might. Instead, the medications were billed as repairing an underlying biological problem.

The strategy worked brilliantly in the marketplace. But there was a catch. “Ironically, just as the public was embracing the ‘serotonin imbalance’ theory of depression,” Harrington writes, “researchers were forming a new consensus” about the idea behind that theory: It was “deeply flawed and probably outright wrong.” Stymied, drug companies have for now abandoned attempts to find new treatments for mental illness, continuing to peddle the old ones with the same claims. And the news has yet to reach, or at any rate affect, consumers. At last count, more than 12 percent of Americans ages 12 and older were taking antidepressants. The chemical-imbalance theory, like the revamped DSM, may fail as science, but as rhetoric it has turned out to be a wild success.

The info is here.

Sunday, April 21, 2019

Microsoft will be adding AI ethics to its standard checklist for product release

Alan Boyle
www.geekwire.com
Originally posted March 25, 2019

Microsoft will “one day very soon” add an ethics review focusing on artificial-intelligence issues to its standard checklist of audits that precede the release of new products, according to Harry Shum, a top executive leading the company’s AI efforts.

AI ethics will join privacy, security and accessibility on the list, Shum said today at MIT Technology Review’s EmTech Digital conference in San Francisco.

Shum, who is executive vice president of Microsoft’s AI and Research group, said companies involved in AI development “need to engineer responsibility into the very fabric of the technology.”

Among the ethical concerns are the potential for AI agents to pick up all-too-human biases from the data on which they’re trained, to piece together privacy-invading insights through deep data analysis, to go horribly awry due to gaps in their perceptual capabilities, or simply to be given too much power by human handlers.

Shum noted that as AI becomes better at analyzing emotions, conversations and writings, the technology could open the way to increased propaganda and misinformation, as well as deeper intrusions into personal privacy.

The info is here.

Saturday, April 20, 2019

Cardiologist Eric Topol on How AI Can Bring Humanity Back to Medicine

Alice Park
Time.com
Originally published March 14, 2019

Here is an excerpt:

What are the best examples of how AI can work in medicine?

We’re seeing rapid uptake of algorithms that make radiologists more accurate. The other group already deriving benefit is ophthalmologists. Diabetic retinopathy, which is a terribly underdiagnosed cause of blindness and a complication of diabetes, is now diagnosed by a machine with an algorithm that is approved by the Food and Drug Administration. And we’re seeing it hit at the consumer level with a smart-watch app with a deep learning algorithm to detect atrial fibrillation.

Is that really artificial intelligence, in the sense that the machine has learned about medicine like doctors?

Artificial intelligence is different from human intelligence. It’s really about using machines with software and algorithms to ingest data and come up with the answer, whether that data is what someone says in speech, or reading patterns and classifying or triaging things.

What worries you the most about AI in medicine?

I have lots of worries. First, there’s the issue of privacy and security of the data. And I’m worried about whether the AI algorithms are always proved out with real patients. Finally, I’m worried about how AI might worsen some inequities. Algorithms are not biased, but the data we put into those algorithms, because they are chosen by humans, often are. But I don’t think these are insoluble problems.

The info is here.

Friday, April 19, 2019

Leader's group-norm violations elicit intentions to leave the group – If the group-norm is not affirmed

Lara Ditrich, Adrian Lüders, Eva Jonas, & Kai Sassenberg
Journal of Experimental Social Psychology
Available online 2 April 2019

Abstract

Group members, even central ones like group leaders, do not always adhere to their group's norms and show norm-violating behavior instead. Observers of this kind of behavior have been shown to react negatively in such situations, and in extreme cases, may even leave their group. The current work set out to test how this reaction might be prevented. We assumed that group-norm affirmations can buffer leaving intentions in response to group-norm violations and tested three potential mechanisms underlying the buffering effect of group-norm affirmations. To this end, we conducted three experiments in which we manipulated group-norm violations and group-norm affirmations. In Study 1, we found group-norm affirmations to buffer leaving intentions after group-norm violations. However, we did not find support for the assumption that group-norm affirmations change how a behavior is evaluated or preserve group members' identification with their group. Thus, neither of these variables can explain the buffering effect of group-norm affirmations. Studies 2 & 3 revealed that group-norm affirmations instead reduce perceived effectiveness of the norm-violator, which in turn predicted lower leaving intentions. The present findings will be discussed based on previous research investigating the consequences of norm violations.

The research is here.

Duke agrees to pay $112.5 million to settle allegation it fraudulently obtained federal research funding

Seth Thomas Gulledge
Triangle Business Journal
Originally posted March 25, 2019

Duke University has agreed to pay $112.5 million to settle a suit with the federal government over allegations the university submitted false research reports to receive federal research dollars.

This week, the university reached a settlement over allegations brought forward by whistleblower Joseph Thomas – a former Duke employee – who alleged that during his time working as a lab research analyst in the pulmonary, asthma and critical care division of Duke University Health Systems, the clinical research coordinator, Erin Potts-Kant, manipulated and falsified studies to receive grant funding.

The case also contends that the university and its office of research support, upon discovering the fraud, knowingly concealed it from the government.

According to court documents, Duke was accused of submitting claims to the National Institutes of Health (NIH) and the Environmental Protection Agency (EPA) between 2006 and 2018 that contained “false or fabricated data,” causing the two agencies to pay out grant funds they “otherwise would not have.” Those fraudulent submissions, the case claims, netted the university nearly $200 million in federal research funding.

“Taxpayers expect and deserve that federal grant dollars will be used efficiently and honestly. Individuals and institutions that receive research funding from the federal government must be scrupulous in conducting research for the common good and rigorous in rooting out fraud,” said Matthew Martin, U.S. attorney for the Middle District of North Carolina in a statement announcing the settlement. “May this serve as a lesson that the use of false or fabricated data in grant applications or reports is completely unacceptable.”

The info is here.

Thursday, April 18, 2019

Google cancels AI ethics board in response to outcry

Kelsey Piper
www.Vox.com
Originally published April 4, 2019

This week, Vox and other outlets reported that Google’s newly created AI ethics board was falling apart amid controversy over several of the board members.

Well, it’s officially done falling apart — it’s been canceled. Google told Vox on Thursday that it’s pulling the plug on the ethics board.

The board survived for barely more than one week. Founded to guide “responsible development of AI” at Google, it would have had eight members and met four times over the course of 2019 to consider concerns about Google’s AI program. Those concerns include how AI can enable authoritarian states, how AI algorithms produce disparate outcomes, whether to work on military applications of AI, and more. But it ran into problems from the start.

Thousands of Google employees signed a petition calling for the removal of one board member, Heritage Foundation president Kay Coles James, over her comments about trans people and her organization’s skepticism of climate change. Meanwhile, the inclusion of drone company CEO Dyan Gibbens reopened old divisions in the company over the use of the company’s AI for military applications.

The info is here.

Why are smarter individuals more prosocial? A study on the mediating roles of empathy and moral identity

Qingke Guo, Peng Sun, Minghang Cai, Xiling Zhang, & Kexin Song
Intelligence
Volume 75, July–August 2019, Pages 1-8

Abstract

The purpose of this study is to examine whether there is an association between intelligence and prosocial behavior (PSB), and whether this association is mediated by empathy and moral identity. Chinese version of the Raven's Standard Progressive Matrices, the Self-Report Altruism Scale Distinguished by the Recipient, Interpersonal Reactivity Index, and the Internalization subscale of the Self-Importance of Moral Identity Scale were administered to 518 (N female = 254, M age = 19.79) undergraduate students. The results showed that fluid intelligence was significantly correlated with self-reported PSB; moral identity, perspective taking, and empathic concern could account for the positive association between intelligence and PSB; the mediation effects of moral identity and empathy were consistent across gender.

The article is here.

Here is part of the Discussion:

This is consistent with previous findings that highly intelligent individuals are more likely to engage in prosocial and civic activities (Aranda & Siyaranamual, 2014; Bekkers & Wiepking, 2011; Wiepking & Maas, 2009). One explanation of the intelligence-prosocial association is that highly intelligent individuals are better able to perceive and understand the desires and feelings of the person in need, and are quicker in making proper decisions and figuring out which behaviors should be enacted (Eisenberg et al., 2015; Gottfredson, 1997). Another explanation is that highly intelligent individuals are smart enough to realize that PSB is rewarding in the long run. PSB is rewarding because the helper is more likely to be selected as a coalition partner or a mate (Millet & Dewitte, 2007; Zahavi, 1977).
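
Note: As a concrete illustration of what a mediation analysis like this involves, here is a hedged sketch of the classic regression-path approach on simulated data. All variable names, coefficients, and data are invented for the example; this is not the study's code.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 518  # matches the study's sample size; the data here are simulated
intelligence = rng.normal(size=n)
empathy = 0.3 * intelligence + rng.normal(size=n)  # hypothetical mediator
prosocial = 0.4 * empathy + 0.1 * intelligence + rng.normal(size=n)

# Path a: predictor -> mediator
a = sm.OLS(empathy, sm.add_constant(intelligence)).fit().params[1]
# Path b: mediator -> outcome, controlling for the predictor
X = sm.add_constant(np.column_stack([empathy, intelligence]))
b = sm.OLS(prosocial, X).fit().params[1]

# The indirect (mediated) effect is the product of the two paths.
print("indirect effect a*b =", round(a * b, 3))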

Wednesday, April 17, 2019

A New Model For AI Ethics In R&D

Forbes Insight Team
Forbes.com
Originally posted March 27, 2019

Here is an excerpt:

The traditional ethics oversight and compliance model has two major problems, whether it is used in biomedical research or in AI. First, a list of guiding principles—whether four or 40—just summarizes important ethical concerns without resolving the conflicts between them.

Say, for example, that the development of a life-saving AI diagnostic tool requires access to large sets of personal data. The principle of respecting autonomy—that is, respecting every individual’s rational, informed, and voluntary decision making about herself and her life—would demand consent for using that data. But the principle of beneficence—that is, doing good—would require that this tool be developed as quickly as possible to help those who are suffering, even if this means neglecting consent. Any board relying solely on these principles for guidance will inevitably face an ethical conflict, because no hierarchy ranks these principles.

Second, decisions handed down by these boards are problematic in themselves. Ethics boards are far removed from researchers, acting as all-powerful decision-makers. Once ethics boards make a decision, typically no appeals process exists and no other authority can validate their decision. Without effective guiding principles and appropriate due process, this model uses ethics boards to police researchers. It implies that researchers cannot be trusted and it focuses solely on blocking what the boards consider to be unethical.

We can develop a better model for AI ethics, one in which ethics complements and enhances research and development and where researchers are trusted collaborators with ethicists. This requires shifting our focus from principles and boards to ethical reasoning and teamwork, from ethics policing to ethics integration.

The info is here.

Warnings of a Dark Side to A.I. in Health Care

Cade Metz and Craig S. Smith
The New York Times
Originally published March 21, 2019

Here is an excerpt:

An adversarial attack exploits a fundamental aspect of the way many A.I. systems are designed and built. Increasingly, A.I. is driven by neural networks, complex mathematical systems that learn tasks largely on their own by analyzing vast amounts of data.

By analyzing thousands of eye scans, for instance, a neural network can learn to detect signs of diabetic blindness. This “machine learning” happens on such an enormous scale — human behavior is defined by countless disparate pieces of data — that it can produce unexpected behavior of its own.

In 2016, a team at Carnegie Mellon used patterns printed on eyeglass frames to fool face-recognition systems into thinking the wearers were celebrities. When the researchers wore the frames, the systems mistook them for famous people, including Milla Jovovich and John Malkovich.

A group of Chinese researchers pulled a similar trick by projecting infrared light from the underside of a hat brim onto the face of whoever wore the hat. The light was invisible to the wearer, but it could trick a face-recognition system into thinking the wearer was, say, the musician Moby, who is Caucasian, rather than an Asian scientist.

Researchers have also warned that adversarial attacks could fool self-driving cars into seeing things that are not there. By making small changes to street signs, they have duped cars into detecting a yield sign instead of a stop sign.
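
Note: For readers curious how such adversarial inputs are crafted, below is a minimal, illustrative sketch of one widely studied technique, the fast gradient sign method. The model, loss, and step size are placeholder assumptions, not the specific attacks described in the article.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    # Nudge each pixel slightly in the direction that increases the model's
    # loss, yielding an input that looks unchanged to a human but can flip
    # the classifier's prediction.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()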

The info is here.

Tuesday, April 16, 2019

Rise Of The Chief Ethics Officer

Forbes Insight Team
Forbes.com
Originally posted March 27, 2019

Here is an excerpt:

Robert Foehl is now executive-in-residence for business law and ethics at the Ohio University College of Business. In industry, he’s best known as the man who laid the ethical groundwork for Target as the company’s first director of corporate ethics.

At a company like Target, says Foehl, ethical issues arise every day. “This includes questions about where goods are sourced, how they are manufactured, the environment, justice and equality in the treatment of employees and customers, and obligations to the community,” he says. “In retail, the biggest issues tend to be around globalism and how we market to consumers. Are people being manipulated? Are we being discriminatory?”

For Foehl, all of these issues are just part of various ethical frameworks that he’s built over the years; complex philosophical frameworks that look at measures of happiness and suffering, the potential for individual harm, and even the impact of a decision on the “virtue” of the company. As he sees it, bringing a technology like AI into the mix has very little impact on that.

“The fact that you have an emerging technology doesn’t matter,” he says, “since you have thinking you can apply to any situation.” Whether it’s AI or big data or any other new tech, says Foehl, “we still put it into an ethical framework. Just because it involves a new technology doesn’t mean it’s a new ethical concept.”

The info is here.

Is there such a thing as moral progress?

John Danaher
Philosophical Disquisitions
Originally posted March 18, 2019

We often speak as if we believe in moral progress. We talk about recent moral changes, such as the legalisation of gay marriage, as ‘progressive’ moral changes. We express dismay at the ‘regressive’ moral views of racists and bigots. Some people (I’m looking at you, Steven Pinker) have written long books that defend the idea that, although there have been setbacks, there has been a general upward trend in our moral attitudes over the course of human history. Martin Luther King once said that the arc of the moral universe is long but bends towards justice.

But does moral progress really exist? And how would we know if it did? Philosophers have puzzled over this question for some time. The problem is this. There is no doubt that there has been moral change over time, and there is no doubt that we often think of our moral views as being more advanced than those of our ancestors, but it is hard to see exactly what justifies this belief. It seems like you would need some absolute moral standard or goal against which you can measure moral change to justify that belief. Do we have such a thing?

In this post, I want to offer some of my own preliminary and underdeveloped thoughts on the idea of moral progress. I do so by first clarifying the concept of moral progress, and then considering whether and when we can say that it exists. I will suggest that moral progress is real, and we are at least sometimes justified in saying that it has taken place. Nevertheless, there are some serious puzzles and conceptual difficulties with identifying some forms of moral progress.

The info is here.

Monday, April 15, 2019

Tech giants are seeking help on AI ethics. Where they seek it matters.

Dave Gershgorn
quartz.com
Originally posted March 30, 2019

Here is an excerpt:

Tech giants are starting to create mechanisms for outside experts to help them with AI ethics—but not always in the ways ethicists want. Google, for instance, announced the members of its new AI ethics council this week—such boards promise to be a rare opportunity for underrepresented groups to be heard. It faced criticism, however, for selecting Kay Coles James, the president of the conservative Heritage Foundation. James has made statements against the Equality Act, which would protect sexual orientation and gender identity as federally protected classes in the US. Those and other comments would seem to put her at odds with Google’s pitch as being a progressive and inclusive company. (Google declined Quartz’s request for comment.)

AI ethicist Joanna Bryson, one of the few members of Google’s new council who has an extensive background in the field, suggested that the inclusion of James helped the company make its ethics oversight more appealing to Republicans and conservative groups. Also on the council is Dyan Gibbens, who heads drone company Trumbull Unmanned and sat next to Donald Trump at a White House roundtable in 2017.

The info is here.

Death by a Thousand Clicks: Where Electronic Health Records Went Wrong

Erika Fry and Fred Schulte
Fortune.com
Originally posted on March 18, 2019

Here is an excerpt:

Damning evidence came from a whistleblower claim filed in 2011 against the company. Brendan Delaney, a British cop turned EHR expert, was hired in 2010 by New York City to work on the eCW implementation at Rikers Island, a jail complex that then had more than 100,000 inmates. But soon after he was hired, Delaney noticed scores of troubling problems with the system, which became the basis for his lawsuit. The patient medication lists weren’t reliable; prescribed drugs would not show up, while discontinued drugs would appear as current, according to the complaint. The EHR would sometimes display one patient’s medication profile accompanied by the physician’s note for a different patient, making it easy to misdiagnose or prescribe a drug to the wrong individual. Prescriptions, some 30,000 of them in 2010, lacked proper start and stop dates, introducing the opportunity for under- or overmedication. The eCW system did not reliably track lab results, concluded Delaney, who tallied 1,884 tests for which they had never gotten outcomes.

(cut)

Electronic health records were supposed to do a lot: make medicine safer, bring higher-quality care, empower patients, and yes, even save money. Boosters heralded an age when researchers could harness the big data within to reveal the most effective treatments for disease and sharply reduce medical errors. Patients, in turn, would have truly portable health records, being able to share their medical histories in a flash with doctors and hospitals anywhere in the country—essential when life-and-death decisions are being made in the ER.

But 10 years after President Barack Obama signed a law to accelerate the digitization of medical records—with the federal government, so far, sinking $36 billion into the effort—America has little to show for its investment.

The info is here.

Sunday, April 14, 2019

Scientists Grew a Mini-Brain in a Dish, And It Connected to a Spinal Cord by Itself

Carly Cassella
www.sciencealert.com
Originally posted March 20, 2019

Lab-growing the most complex structure in the known Universe may sound like an impossible task, but that hasn't stopped scientists from trying.

After years of work, researchers in the UK have now cultivated one of the most sophisticated miniature brains-in-a-dish yet, and it actually managed to behave in a slightly freaky fashion.

The grey blob was composed of about two million organised neurons, which is similar to the human foetal brain at 12 to 13 weeks. At this stage, this so-called 'brain organoid' is not complex enough to have any thoughts, feelings, or consciousness — but that doesn't make it entirely inert.

When placed next to a piece of mouse spinal cord and a piece of mouse muscle tissue, this disembodied, pea-sized blob of human brain cells sent out long, probing tendrils to check out its new neighbours.

Using long-term live microscopy, researchers were able to watch as the mini-brain spontaneously connected itself to the nearby spinal cord and muscle tissue.

The info is here.

Saturday, April 13, 2019

Nudging the better angels of our nature: A field experiment on morality and well-being.

Adam Waytz, & Wilhelm Hofmann
Emotion, Feb 28, 2019, No Pagination Specified

Abstract

A field experiment examines how moral behavior, moral thoughts, and self-benefiting behavior affect daily well-being. Using experience sampling technology, we randomly grouped participants over 10 days to either behave morally, have moral thoughts, or do something positive for themselves. Participants received treatment-specific instructions in the morning of 5 days and no instructions on the other 5 control days. At each day’s end, participants completed measures that examined, among others, subjective well-being, self-perceived morality and empathy, and social isolation and closeness. Full analyses found limited evidence for treatment- versus control-day differences. However, restricting analyses to occasions on which participants complied with instructions revealed treatment- versus control-day main effects on all measures, while showing that self-perceived morality and empathy toward others particularly increased in the moral deeds and moral thoughts group. These findings suggest that moral behavior, moral thoughts, and self-benefiting behavior are all effective means of boosting well-being, but only moral deeds and, perhaps surprisingly, also moral thoughts strengthen the moral self-concept and empathy. Results from an additional study assessing laypeople’s predictions suggest that people do not fully intuit this pattern of results.

Here is part of the Discussion:

Overall, inducing moral thoughts and behaviors toward others enhanced feelings of virtuousness compared to the case for self-serving behavior. This makes sense given that people likely internalized their moral thoughts and behaviors in the two moral conditions, whereas the treat-yourself condition did not direct participants toward morality. Restricting analyses to days when people complied with treatment-specific instructions revealed significant positive effects on satisfaction for all treatments. That is, compared to receiving no instructions to behave morally, think morally, or treat oneself, receiving and complying with such instructions on treatment-specific days increased happiness and satisfaction with one’s life. Although the effect size was highest in the treat-yourself condition, improvements in satisfaction were statistically equivalent across conditions. Overall, the moral deeds condition in this compliant-only analysis revealed the broadest improvements across other measures related to well-being, whereas the treat-yourself condition was the only condition to significantly reduce exhaustion. Examining instances when participants reported behaving morally, thinking morally, or behaving self-servingly, independent of treatment, revealed comparable results for moral deeds and self-treats enhancing well-being generally, with moral thoughts enhancing most measures of well-being as well.

The research is here.

Friday, April 12, 2019

It’s Not Enough to Be Right—You Also Have to Be Kind

Ryan Holiday
www.medium.com
Originally posted on March 20, 2019

Here is an excerpt:

Reason is easy. Being clever is easy. Humiliating someone in the wrong is easy too. But putting yourself in their shoes, kindly nudging them to where they need to be, understanding that they have emotional and irrational beliefs just like you have emotional and irrational beliefs—that’s all much harder. So is not writing off other people. So is spending more time working on the plank in your own eye than on the splinter in theirs. We know we wouldn’t respond to someone talking to us that way, but we seem to think it’s okay to do it to other people.

There is a great clip of Joe Rogan talking during the immigration crisis last year. He doesn’t make some fact-based argument about whether immigration is or isn’t a problem. He doesn’t attack anyone on either side of the issue. He just talks about what it feels like—to him—to hear a mother screaming for the child she’s been separated from. The clip has been seen millions of times now and undoubtedly has changed more minds than a government shutdown, than the squabbles and fights on CNN, than the endless op-eds and think-tank reports.

Rogan doesn’t even tell anyone what to think. (Though, ironically, the clip was abused by plenty of editors who tried to make it partisan). He just says that if you can’t relate to that mom and her pain, you’re not on the right team. That’s the right way to think about it.

The info is here.

Not “burnout,” not moral injury—human rights violations

Pamela Wible
www.idealmedicalcare.org
Originally posted March 18, 2019

Here is an excerpt:

Moral injury extended beyond combat veterans to include physicians in 2018, when Dean and Talbot announced their opposition and alternative to the label physician “burnout.” They believe (as I do) that physician cynicism, exhaustion, and decreased productivity are symptoms of a broken system. Economic forces, technological demands, and widespread intergenerational physician mental health wounds have culminated in a highly dysfunctional and toxic health care system in which we find ourselves in daily forced betrayal of our deepest values.

Manifestations of moral injury in victims include self-harm, poor self-care, substance abuse, recklessness, self-defeating behaviors, hopelessness, self-loathing, and decreased empathy. I’ve witnessed all of these far too frequently among physicians.

Yet moral injury is not an official diagnosis. No specific solutions are offered at medical institutions to combat physician moral injury, though moral injury treatment among military veterans may include listening circles (where veterans share battlefield stories), forgiveness rituals, and individual therapy. The fact is that most victims of moral injury struggle on their own.

With no evidence-based treatments for physician moral injury and zero progress after forty years of burnout prevention, what next? Enter the real diagnosis—human rights violations—with clear evidence-based solutions.

The info is here.

Thursday, April 11, 2019

6 women sexually abused by counselor at women's rehab center Timberline Knolls, prosecutors say

David Jackson
The Chicago Tribune
Originally posted March 7, 2019

Here is an excerpt:

Cook County prosecutors allege that a Timberline Knolls counselor, Mike Jacksa, sexually assaulted or abused six patients last year at the leafy 43-acre rehab center in suburban Lemont. Former patients told police that Jacksa subjected them to rape, forced oral sex, digital penetration and fondling beneath their clothes. He faces 62 felony charges.

The abuse allegations began to surface last summer, but Timberline officials waited at least three weeks to contact law enforcement, police reports show. In the meantime, Timberline staff conducted internal investigations, twice suspending and reinstating Jacksa, police records show.

In early July, when Timberline staff discovered journal entries by a patient that described her sexual encounters with Jacksa, they confronted the woman in his presence, police reports show. Afterward, the woman “went back to her lodge and broke a mirror, intending to hurt herself or commit suicide over the embarrassment and emotional distress the whole situation with Jacksa had caused,” a Lemont police report said. “She was transported to a hospital.”

Widely accepted treatment standards say people who report sex crimes should not be forced to give their accounts in front of their alleged attackers.

Timberline Knolls suspended Jacksa a third time in early August, after the police got involved, then fired him Aug. 10. His alleged sexual attacks on patients were “an isolated incident,” said Timberline spokesman Gary Mack. “Facility administrators were greatly saddened by this whole situation and believed they acted swiftly and certainly to take Jacksa off the street.”

The info is here.

GAO urges more transparency of political appointments, compliance with agency ethics programs

Nicole Ogrysko
www.federalnewsnetwork.com
Originally posted March 15, 2019

The Government Accountability Office is urging Congress to require more transparency of agencies in collecting and publishing information on political appointees in the executive branch.

A new GAO study describing agencies’ struggles to track political appointees and their compliance with ethics programs reads, in a sense, like a greatest hits album of typical challenges pestering many facets of government.

Agency data tracking the appointments and departures of political officials, for example, is inconsistent and scattered across multiple systems and organizations. And challenges in recruitment, retention and training have led to persistent vacancies at departmental ethics offices.

“The public has an interest in knowing who is serving in the government and making policy decisions. The Office of Management and Budget stated that transparency promotes accountability by providing the public with information about what the government is doing,” GAO wrote. “Until the names of political appointees and their position, position type, agency or department name, start and end dates are publicly available at least quarterly, it will be difficult for the public to access comprehensive and reliable information.”

The info is here.

Wednesday, April 10, 2019

FDA Chief Scott Gottlieb Calls for Tighter Regulations on Electronic Health Records

Fred Schulte and Erika Fry
Fortune.com
Originally posted March 21, 2019

Food and Drug Administration Commissioner Scott Gottlieb on Wednesday called for tighter scrutiny of electronic health records systems, which have prompted thousands of reports of patient injuries and other safety problems over the past decade.

“What we really need is a much more tailored approach, so that we have appropriate oversight of EHRs when they’re doing things that could create risk for patients,” Gottlieb said in an interview with Kaiser Health News.

Gottlieb was responding to “Botched Operation,” a report published this week by KHN and Fortune. The investigation found that the federal government has spent more than $36 billion over the past 10 years to switch doctors and hospitals from paper to digital records systems. In that time, thousands of reports of deaths, injuries, and near misses linked to EHRs have piled up in databases—including at least one run by the FDA.

The info is here.

Gov. Newsom to order halt to California’s death penalty

Bob Egelko and Alexei Koseff
San Francisco Chronicle
Originally posted March 12, 2019

Gov. Gavin Newsom is suspending the death penalty in California, calling it discriminatory and immoral, and is granting reprieves to the 737 condemned inmates on the nation’s largest Death Row.

“I do not believe that a civilized society can claim to be a leader in the world as long as its government continues to sanction the premeditated and discriminatory execution of its people,” Newsom said in a statement accompanying an executive order, to be issued Wednesday, declaring a moratorium on capital punishment in the state. “The death penalty is inconsistent with our bedrock values and strikes at the very heart of what it means to be a Californian.”

He plans to order an immediate shutdown of the death chamber at San Quentin State Prison, where the last execution was carried out in 2006. Newsom is also withdrawing California’s recently revised procedures for executions by lethal injection, ending — at least for now — the struggle by prison officials for more than a decade to devise procedures that would pass muster in federal court by minimizing the risk of a botched and painful execution.

The info is here.

Tuesday, April 9, 2019

U.S. Ethics Office Declines to Certify Mnuchin’s Financial Disclosure

Alan Rappeport
The New York Times
Originally published April 4, 2019

The top federal ethics watchdog said on Thursday that Treasury Secretary Steven Mnuchin’s sale of his stake in a film production business to his wife did not comply with federal ethics rules, and it would not certify his 2018 financial disclosure report as a result.

Although Mr. Mnuchin will not face penalties for failing to comply, he has been required to rewrite his federal ethics agreement and to promise to recuse himself from government matters that could affect his wife’s business.

Mr. Mnuchin in 2017 sold his stake in StormChaser Partners to his then-fiancée, Louise Linton, as part of a series of divestments before becoming Treasury secretary. Since they are now married, government ethics rules consider the asset to be owned by Mr. Mnuchin, potentially creating a conflict of interest for an official who has been negotiating for expanded access for the movie industry as part of trade talks with China.

The controversy over Mr. Mnuchin’s finances has become an unwanted distraction in recent weeks as the Trump administration has been engaged in intense negotiations with China on a wide range of trade matters. While Robert Lighthizer, President Trump’s top trade official, has been leading the talks, Mr. Mnuchin has been the point person for promoting the film industry because of his background as a Hollywood producer and investor.

The info is here.

N.J. approves bill giving terminally ill people the right to end their lives

Susan Livio
www.nj.com
Originally posted March 25, 2019

New Jersey is poised to become the eighth state to allow doctors to write a lethal prescription for terminally ill patients who want to end their lives.

The state Assembly voted 41-33 with four abstentions Monday to pass the “Medical Aid in Dying for the Terminally Ill Act.” Minutes later, the state Senate approved the bill 21-16.

Gov. Phil Murphy later issued a statement saying he would sign the measure into law.

“Allowing terminally ill and dying residents the dignity to make end-of-life decisions according to their own consciences is the right thing to do,” the Democratic governor said. “I look forward to signing this legislation into law.”

The measure (A1504) would take effect four months after it is signed.

Susan Boyce, 55, of Rumson, smiled and wept after the final vote.

“I’ve been working on this quite a while,” said Boyce, who has been diagnosed with a terminal autoimmune disease, Alpha-1 antitrypsin deficiency, and needs an oxygen tank to breathe.

The info is here.

Monday, April 8, 2019

Mark Zuckerberg And The Tech World Still Do Not Understand Ethics

Derek Lidow
Forbes.com
Originally posted March 11, 2018

Here is an excerpt:

Expectations for technology startups encourage expedient, not ethical, decision making. 

As people in the industry are fond of saying, the tech world moves at “lightspeed.” That includes the pace of innovation, the rise and fall of markets, the speed of customer adoption, the evolution of business models and the lifecycles of companies. Decisions must be made quickly and leaders too often choose the most expedient path regardless of whether it is safe, legal or ethical.

This “move fast and break things” ethos is embodied in practices like working toward a minimum viable product (MVP), helping to establish a bias toward cutting corners. In addition, many founders look for CFOs who are “tech trained”—that is, people accustomed to a world where time and money wait for no one—as opposed to seasoned financial officers with good accounting chops and a moral compass.

The host of scandals at Zenefits, a cloud-based provider of employee-benefits software to small businesses and once one of the most promising Silicon Valley startups, had its origins in the shortcuts the company took in order to meet unreasonably high expectations for growth. The founder apparently created software that helped employees cheat on California’s online broker license course. As the company expanded rapidly, it began hiring people with little experience in the highly regulated health insurance industry. As it moved from small businesses to larger ones, the strain on its software increased. Instead of developing appropriate software, the company hired more people to manually take up the slack where the existing software failed. When an interviewer asked the founder, before the scandals broke, why he was so intent on expanding so rapidly, he replied, “Slowing down doesn’t feel like something I want to do.”

The info is here.

Officials gather for ethics training

Jon Wysochanski
Star Beacon
Originally posted March 23, 2019

Here is an excerpt:

A large range of actions can constitute unethical behavior, from a health inspector inspecting his mom and dad’s restaurant to a public official accepting a ticket to an Ohio State Buckeyes game because he doesn’t consider it monetary, Willeke said. Unethical behavior doesn’t have to be as egregious as the real-world example of a state employee inspecting a string of daycare centers she and her husband owned.

It’s not possible to find someone devoid of personal bias, Willeke said, and it is common for potential conflicts of interest to present themselves. It’s how public officials react to those biases or potential conflicts that matters most. The best thing for a public official facing a conflict to do is to walk away from the situation.

“Having a conflict of interest has never been illegal,” Willeke said. “It is when people act on those conflicts of interest that we actually see a crime under Ohio Ethics Law.”

When it comes to accepting gifts, Ohio law does not stipulate a dollar amount, only whether the gift is substantial or improper. A vendor-purchased dinner at Bob Evans might not violate the law, while dinner at a high-end restaurant complete with the best wine and most expensive menu items would.

And when it comes to unlawful interests in public contracts, a contract means any time a government entity spends money. That could mean the trustee who takes home a township backhoe on weekends to do work on the side, the library director who uses the copier to print hundreds of flyers for their business, the state employee who uses a state computer to run a real estate business or the fireman who uses a ladder truck on a home painting job.

The info is here.

Editor's note: We need more of this type of training for government officials.

Sunday, April 7, 2019

In Spain, prisoners’ brains are being electrically stimulated in the name of science

Sigal Samuel
vox.com
Originally posted March 9, 2019

A team of scientists in Spain is getting ready to experiment on prisoners. If the scientists get the necessary approvals, they plan to start a study this month that involves placing electrodes on inmates’ foreheads and sending a current into their brains. The electricity will target the prefrontal cortex, a brain region that plays a role in decision-making and social behavior. The idea is that stimulating more activity in that region may make the prisoners less aggressive.

This technique — transcranial direct current stimulation, or tDCS — is a form of neurointervention, meaning it acts directly on the brain. Using neurointerventions in the criminal justice system is highly controversial. In recent years, scientists and philosophers have been debating under what conditions (if any) it might be ethical.

The Spanish team is the first to use tDCS on prisoners. They’ve already done it in a pilot study, publishing their findings in Neuroscience in January, and they were all set to implement a follow-up study involving at least 12 convicted murderers and other inmates this month. On Wednesday, New Scientist broke news of the upcoming experiment, noting that it had approval from the Spanish government, prison officials, and a university ethics committee. The next day, the Interior Ministry changed course and put the study on hold.

Andrés Molero-Chamizo, a psychologist at the University of Huelva and the lead researcher behind the study, told me he’s trying to find out what led to the government’s unexpected decision. He said it makes sense to run such an experiment on inmates because “prisoners have a high level of aggressiveness.”

The info is here.

Saturday, April 6, 2019

Wit et al. vs. United Behavioral Health and Alexander et al. vs. United Behavioral Health

U.S. Federal Court Finds United Healthcare Affiliate Illegally Denied Mental Health and Substance Use Coverage in Nationwide Class Action

  • Landmark Case Challenges the Nation’s Largest Mental Health Insurance Company for Unlawful, Systematic Claims Denials – and Wins
  • Groundbreaking Ruling Affects Certified Classes of Tens of Thousands of Patients, Including Thousands of Children and Teenagers 
  • Judge Rules, “At every level of care that is at issue in this case, there is an excessive emphasis on addressing acute symptoms and stabilizing crises while ignoring the effective treatment of members’ underlying conditions.”

In a landmark mental health ruling, a federal court held today that health insurance giant United Behavioral Health (UBH), which serves over 60 million members and is owned by UnitedHealth Group, used flawed internal guidelines to unlawfully deny mental health and substance use treatment for its insureds across the United States. The historic class action was filed by Psych-Appeal, Inc. and Zuckerman Spaeder LLP, and litigated in the U.S. District Court for the Northern District of California.

The federal court found that, to promote its own bottom line, UBH denied claims based on internally developed medical necessity criteria that were far more restrictive than generally accepted standards for behavioral health care. Specifically, the court found that UBH’s criteria were skewed to cover “acute” treatment, which is short-term or crisis-focused, and disregarded chronic or complex mental health conditions that often require ongoing care.

The court was particularly troubled by UBH’s lack of coverage criteria for children and adolescents, estimated to number in the thousands in the certified classes.

“For far too long, patients and their families have been stretched to the breaking point, both financially and emotionally, as they battle with insurers for the mental health coverage promised by their health plans,” said Meiram Bendat of Psych-Appeal, Inc. and co-counsel for the plaintiffs who uncovered the guideline flaws. “Now a court has ruled that denying coverage based on defective medical necessity criteria is illegal.”

In its decision, the court also held that UBH misled regulators about its guidelines being consistent with the American Society of Addiction Medicine (ASAM) criteria, which insurers must use in Connecticut, Illinois and Rhode Island. Additionally, the court found that UBH failed to apply Texas-mandated substance use criteria for at least a portion of the class period.

The legal opinion is here.

Friday, April 5, 2019

A Prominent Economist’s Death Prompts Talk of Mental Health in the Professoriate

Emma Pettit
The Chronicle of Higher Education
Originally posted March 19, 2019

Reaching Out

For Bruce Macintosh, Krueger’s death was a reminder of how isolating academe can be. Macintosh is a professor of physics at Stanford University who was employed at a national laboratory, not a university, until about five years ago. That culture was totally different, he said. At other workplaces, Macintosh said, you interact regularly with peers and supervisors, who are paying close attention to you and your work.

“There’s nothing like that in an academic environment,” he said. “You can shut down completely for a year, and no one will notice,” as long as the grades get turned in.

It seems, Macintosh said, as if there should be multiple layers of support within a university department to help faculty members who experience depression or other forms of mental illness. But certain barriers still exist between professors and the resources they need.

A 2017 survey of 267 faculty members with mental-health histories or mental illnesses found that most respondents had little to no familiarity with accommodations at their institution. Even fewer reported using them.

The info is here.

Note: Career success, wealth, and prestige are not protective factors for suicide attempts or completions. Interpersonal connections to family and friends, access to quality mental health care, problem-solving skills, meaning in life, and purposefulness are.

Ordinary people associate addiction with loss of free will

A. J. Vonasch, C. J. Clark, S. Laub, K. D. Vohs, & R. F. Baumeister
Addictive Behaviors Reports
Volume 5, June 2017, Pages 56-66

Introduction
It is widely believed that addiction entails a loss of free will, even though this point is controversial among scholars. There is arguably a downside to this belief, in that addicts who believe they lack the free will to quit an addiction might therefore fail to quit.

Methods
A correlational study tested the relationship between belief in free will and addiction. Follow-up studies tested steps of a potential mechanism: 1) people think drugs undermine free will; 2) people believe addiction undermines free will more when doing so serves the self; 3) disbelief in free will leads people to perceive various temptations as more addictive.

Results
People with lower belief in free will were more likely to have a history of addiction to alcohol and other drugs, and also less likely to have successfully quit alcohol. People believe that drugs undermine free will, and they use this belief to self-servingly attribute less free will to their bad actions than to good ones. Low belief in free will also increases perceptions that things are addictive.

Conclusions
Addiction is widely seen as loss of free will. The belief can be used in self-serving ways that may undermine people's efforts to quit.

The research is here.

Thursday, April 4, 2019

Confucian Ethics as Role-Based Ethics

A. T. Nuyen
International Philosophical Quarterly
Volume 47, Issue 3, September 2007, 315-328.

Abstract

For many commentators, Confucian ethics is a kind of virtue ethics. However, there is enough textual evidence to suggest that it can be interpreted as an ethics based on rules, consequentialist as well as deontological. Against these views, I argue that Confucian ethics is based on the roles that make an agent the person he or she is. Further, I argue that in Confucianism the question of what it is that a person ought to do cannot be separated from the question of what it is to be a person, and that the latter is answered in terms of the roles that arise from the network of social relationships in which a person stands. This does not mean that Confucian ethics is unlike anything found in Western philosophy. Indeed, I show that many Western thinkers have advanced a view of ethics similar to the Confucian ethics as I interpret it.

The info is here.

I’m a Journalist. Apparently, I’m Also One of America’s “Top Doctors.”

Marshall Allen
Propublica.org
Originally posted Feb. 28, 2019

Here is an excerpt:

And now, for reasons still unclear, Top Doctor Awards had chosen me — and I was almost perfectly the wrong person to pick. I’ve spent the last 13 years reporting on health care, a good chunk of it examining how our health care system measures the quality of doctors. Medicine is complex, and there’s no simple way of saying some doctors are better than others. Truly assessing the performance of doctors, from their diagnostic or surgical outcomes to the satisfaction of their patients, is challenging work. And yet, for-profit companies churn out lists of “Super” or “Top” or “Best” physicians all the time, displaying them in magazine ads, online listings or via shiny plaques or promotional videos the companies produce for an added fee.

On my call with Anne from Top Doctors, the conversation took a surreal turn.

“It says you work for a company called ProPublica,” she said, blithely. At least she had that right.

I responded that I did and that I was actually a journalist, not a doctor. Is that going to be a problem? I asked. Or can you still give me the “Top Doctor” award?

There was a pause. Clearly, I had thrown a baffling curve into her script. She quickly regrouped. “Yes,” she decided, I could have the award.

Anne’s bonus, I thought, must be volume-based.

Then we got down to business. The honor came with a customized plaque, with my choice of cherry wood with gold trim or black with chrome trim. I mulled over which vibe better fit my unique brand of medicine: the more traditional cherry or the more modern black?

The info is here.

Wednesday, April 3, 2019

Artificial Morality

Robert Koehler
www.citywatchla.com
Originally posted March 21, 2019

Here is an excerpt:

What I see here is moral awakening scrambling for sociopolitical traction: Employees are standing for something larger than sheer personal interests, in the process pushing the Big Tech brass to think beyond their need for an endless flow of capital, consequences be damned.

This is happening across the country. A movement is percolating: Tech won’t build it!

“Across the technology industry,” the New York Times reported in October, “rank-and-file employees are demanding greater insight into how their companies are deploying the technology that they built. At Google, Amazon, Microsoft and Salesforce, as well as at tech start-ups, engineers and technologists are increasingly asking whether the products they are working on are being used for surveillance in places like China or for military projects in the United States or elsewhere.

“That’s a change from the past, when Silicon Valley workers typically developed products with little questioning about the social costs.”

What if moral thinking — not in books and philosophical tracts, but in the real world, both corporate and political — were as large and complex as technical thinking? It could no longer hide behind the cliché of the just war (and surely the next one we’re preparing for will be just), but would have to evaluate war itself — all wars, including the ones of the past 70 years or so, in the fullness of their costs and consequences — as well as look ahead to the kind of future we could create, depending on what decisions we make today.

Complex moral thinking doesn’t ignore the need to survive, financially and otherwise, in the present moment, but it stays calm in the face of that need and sees survival as a collective, not a competitive, enterprise.

The info is here.

Feeling Good: Integrating the Psychology and Epistemology of Moral Intuition and Emotion

Hossein Dabbagh
Journal of Cognition and Neuroethics 5 (3): 1–30.

Abstract

Is the epistemology of moral intuitions compatible with admitting a role for emotion? I argue in this paper that moral intuitions and emotions can be partners without creating an epistemic threat. I start off by offering some empirical findings to weaken Singer’s (and Greene’s and Haidt’s) debunking argument against moral intuition, which treats emotions as a distorting factor. In the second part of the paper, I argue that the standard contrast between intuition and emotion is a mistake. Moral intuitions and emotions are not contestants if we construe moral intuition as non-doxastic intellectual seeming and emotion as a non-doxastic perceptual-like state. This will show that emotions support, rather than distort, the epistemic standing of moral intuitions.

Here is an excerpt:

However, the cognitive sciences, as I argued above, show us that seeing all emotions in this excessively pessimistic way is not plausible. To think of emotional experience as always being a source of epistemic distortion would be wrong. On the contrary, there are some reasons to believe that emotional experiences can sometimes make a positive contribution to our practical rationality. So there is a possibility that some emotions are not distorting factors. If this is right, we are no longer justified in saying that emotions always distort our epistemic activities. Instead, emotions (construed as quasi-perceptual experiences) might have some cognitive elements assessable for rationality.

The paper is here.

Tuesday, April 2, 2019

Former Patient Coordinator Pleads Guilty to Wrongfully Disclosing Health Information to Cause Harm

Department of Justice
U.S. Attorney’s Office
Western District of Pennsylvania
Originally posted March 6, 2019

A resident of Butler, Pennsylvania, pleaded guilty in federal court to a charge of wrongfully disclosing the health information of another individual, United States Attorney Scott W. Brady announced today.

Linda Sue Kalina, 61, pleaded guilty to one count before United States District Judge Arthur J. Schwab.

In connection with the guilty plea, the court was advised that Linda Sue Kalina worked from March 7, 2016, through June 23, 2017, as a Patient Information Coordinator with UPMC and its affiliate, Tri Rivers Musculoskeletal Centers (TRMC) in Mars, Pennsylvania, and that during her employment, contrary to the requirements of the Health Insurance Portability and Accountability Act (HIPAA), she improperly accessed the individual health information of 111 UPMC patients who had never been provided services at TRMC. Specifically, on August 11, 2017, Kalina unlawfully disclosed personal gynecological health information related to two such patients, with the intent to cause those individuals embarrassment and mental distress.

Judge Schwab scheduled sentencing for June 25, 2019, at 10 a.m. The law provides for a total sentence of up to 10 years in prison, a fine of $250,000, or both. Under the Federal Sentencing Guidelines, the actual sentence imposed is based upon the seriousness of the offense and the prior criminal history, if any, of the defendant. Kalina remains on bond pending the sentencing hearing.

Assistant United States Attorney Carolyn J. Bloch is prosecuting this case on behalf of the government.

The Federal Bureau of Investigation conducted the investigation that led to the prosecution of Kalina.

Will You Forgive Your Supervisor’s Wrongdoings? The Moral Licensing Effect of Ethical Leader Behaviors

Rong Wang and Darius K.-S. Chan
Front. Psychol., 05 March 2019
https://doi.org/10.3389/fpsyg.2019.00484

Abstract

Moral licensing theory suggests that observers may liberate actors to behave in morally questionable ways due to the actors’ history of moral behaviors. Drawing on this view, a scenario experiment with a 2 (high vs. low ethical) × 2 (internal vs. external motivation) between-subject design (N = 455) was conducted in the current study. We examined whether prior ethical leader behaviors cause subordinates to license subsequent abusive supervision, as well as the moderating role of behavior motivation on such effects. The results showed that when supervisors demonstrated prior ethical behaviors, subordinates, as victims, liberated them to act in abusive ways. Specifically, subordinates showed high levels of tolerance and low levels of condemnation toward abusive supervision and seldom experienced emotional responses to supervisors’ abusive behaviors. Moreover, subordinates tended to attribute abusive supervision, viewed as a kind of mistreatment without an immediate intent to cause harm, to characteristics of the victims and of the organization rather than of the supervisors per se. When supervisors behaved morally out of internal rather than external motivations, the aforementioned licensing effects were stronger.

Here is a portion of the Discussion:

The main findings of this research have some implications for organizational practice. Subordinates have a tendency to license leaders’ morally questionable behaviors after observing leaders’ prior ethical behaviors, which may foster tolerance of, and even encourage, destructive leadership styles. First, organizations can take steps, including training and interventions, to strengthen their ethical climate. An organization’s ethical climate is not only helpful in managing ethical behavior within the organization, but also helps shape organizational members’ zero-tolerance attitude toward leaders’ mistreatment and questionable behaviors (Bartels et al., 1998).

Monday, April 1, 2019

Psychiatrist suspended for ‘inappropriate relationship.’ He got a $196K state job.

Steve Contorno & Lawrence Mower
www.miamiherald.com
Originally posted February 28, 2019

Less than a year ago, Domingo Cerra Fernandez was suspended from practicing medicine in the state of Florida.

The Ocala psychiatrist allegedly committed one of the cardinal sins of his discipline: He propositioned a patient to have a sexual and romantic relationship with him. He then continued to treat her.

But just months after his Florida suspension ended, Cerra Fernandez has a new job. He’s a senior physician at the North Florida Evaluation and Treatment Center, a maximum-security state-run treatment facility for mentally disabled adult male patients.

How did a recently suspended psychiatrist find himself working with some of Florida’s most vulnerable and dangerous residents, with a $196,000 annual salary?

The Department of Children and Families, which runs the facility, knew about his case before hiring him to a job that had been vacant for more than a year. DaMonica Smith, a department spokeswoman, told the Herald/Times that Cerra Fernandez was up front about his discipline.

The info is here.

Neuroscience Readies for a Showdown Over Consciousness Ideas

Philip Ball
Quanta Magazine
Originally published March 6, 2019

Here is an excerpt:

Philosophers have debated the nature of consciousness and whether it can inhere in things other than humans for thousands of years, but in the modern era, pressing practical and moral implications make the need for answers more urgent. As artificial intelligence (AI) grows increasingly sophisticated, it might become impossible to tell whether one is dealing with a machine or a human merely by interacting with it — the classic Turing test. But would that mean AI deserves moral consideration?

Understanding consciousness also impinges on animal rights and welfare, and on a wide range of medical and legal questions about mental impairments. A group of more than 50 leading neuroscientists, psychologists, cognitive scientists and others recently called for greater recognition of the importance of research on this difficult subject. “Theories of consciousness need to be tested rigorously and revised repeatedly amid the long process of accumulation of empirical evidence,” the authors said, adding that “myths and speculative conjectures also need to be identified as such.”

You can hardly do experiments on consciousness without having first defined it. But that’s already difficult because we use the word in several ways. Humans are conscious beings, but we can lose consciousness, for example under anesthesia. We can say we are conscious of something — a strange noise coming out of our laptop, say. But in general, the quality of consciousness refers to a capacity to experience one’s existence rather than just recording it or responding to stimuli like an automaton. Philosophers of mind often refer to this as the principle that one can meaningfully speak about what it is to be “like” a conscious being — even if we can never actually have that experience beyond ourselves.

The info is here.