Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, April 17, 2019

A New Model For AI Ethics In R&D

Forbes Insight Team
Forbes.com
Originally posted March 27, 2019

Here is an excerpt:

The traditional ethics oversight and compliance model has two major problems, whether it is used in biomedical research or in AI. First, a list of guiding principles—whether four or 40—just summarizes important ethical concerns without resolving the conflicts between them.

Say, for example, that the development of a life-saving AI diagnostic tool requires access to large sets of personal data. The principle of respecting autonomy—that is, respecting every individual’s rational, informed, and voluntary decision making about herself and her life—would demand consent for using that data. But the principle of beneficence—that is, doing good—would require that this tool be developed as quickly as possible to help those who are suffering, even if this means neglecting consent. Any board relying solely on these principles for guidance will inevitably face an ethical conflict, because no hierarchy ranks these principles.

Second, decisions handed down by these boards are problematic in themselves. Ethics boards are far removed from researchers, acting as all-powerful decision-makers. Once ethics boards make a decision, typically no appeals process exists and no other authority can validate their decision. Without effective guiding principles and appropriate due process, this model uses ethics boards to police researchers. It implies that researchers cannot be trusted and it focuses solely on blocking what the boards consider to be unethical.

We can develop a better model for AI ethics, one in which ethics complements and enhances research and development and where researchers are trusted collaborators with ethicists. This requires shifting our focus from principles and boards to ethical reasoning and teamwork, from ethics policing to ethics integration.

The info is here.

Warnings of a Dark Side to A.I. in Health Care

Cade Metz and Craig S. Smith
The New York Times
Originally published March 21, 2019

Here is an excerpt:

An adversarial attack exploits a fundamental aspect of the way many A.I. systems are designed and built. Increasingly, A.I. is driven by neural networks, complex mathematical systems that learn tasks largely on their own by analyzing vast amounts of data.

By analyzing thousands of eye scans, for instance, a neural network can learn to detect signs of diabetic blindness. This “machine learning” happens on such an enormous scale — human behavior is defined by countless disparate pieces of data — that it can produce unexpected behavior of its own.

In 2016, a team at Carnegie Mellon used patterns printed on eyeglass frames to fool face-recognition systems into thinking the wearers were celebrities. When the researchers wore the frames, the systems mistook them for famous people, including Milla Jovovich and John Malkovich.

A group of Chinese researchers pulled a similar trick by projecting infrared light from the underside of a hat brim onto the face of whoever wore the hat. The light was invisible to the wearer, but it could trick a face-recognition system into thinking the wearer was, say, the musician Moby, who is Caucasian, rather than an Asian scientist.

Researchers have also warned that adversarial attacks could fool self-driving cars into seeing things that are not there. By making small changes to street signs, they have duped cars into detecting a yield sign instead of a stop sign.
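To make these attacks concrete, here is a minimal sketch of one well-known technique, the fast gradient sign method (FGSM), which nudges every input pixel slightly in the direction that most increases the model's error. This is an illustrative sketch only: the model, the image tensor, and the epsilon value are assumptions, not details from the article or the research it describes.

```python
# Minimal FGSM sketch (illustrative; `model`, `image`, `label`, and
# `epsilon` are hypothetical stand-ins, not from the article).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` perturbed to increase the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # how wrong is the model now?
    loss.backward()                              # gradient of loss w.r.t. pixels
    # Step each pixel a tiny amount in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()      # keep pixel values valid
```

The unsettling point is that epsilon can be small enough that the change is invisible to a person while still flipping the model's prediction, which is what makes the eyeglass, hat-brim, and street-sign examples above possible.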

The info is here.

Tuesday, April 16, 2019

Rise Of The Chief Ethics Officer

Forbes Insight Team
Forbes.com
Originally posted March 27, 2019

Here is an excerpt:

Robert Foehl is now executive-in-residence for business law and ethics at the Ohio University College of Business. In industry, he’s best known as the man who laid the ethical groundwork for Target as the company’s first director of corporate ethics.

At a company like Target, says Foehl, ethical issues arise every day. “This includes questions about where goods are sourced, how they are manufactured, the environment, justice and equality in the treatment of employees and customers, and obligations to the community,” he says. “In retail, the biggest issues tend to be around globalism and how we market to consumers. Are people being manipulated? Are we being discriminatory?”

For Foehl, all of these issues are just part of various ethical frameworks that he’s built over the years; complex philosophical frameworks that look at measures of happiness and suffering, the potential for individual harm, and even the impact of a decision on the “virtue” of the company. As he sees it, bringing a technology like AI into the mix has very little impact on that.

“The fact that you have an emerging technology doesn’t matter,” he says, “since you have thinking you can apply to any situation.” Whether it’s AI or big data or any other new tech, says Foehl, “we still put it into an ethical framework. Just because it involves a new technology doesn’t mean it’s a new ethical concept.”

The info is here.

Is there such a thing as moral progress?

John Danaher
Philosophical Disquisitions
Originally posted March 18, 2019

We often speak as if we believe in moral progress. We talk about recent moral changes, such as the legalisation of gay marriage, as ‘progressive’ moral changes. We express dismay at the ‘regressive’ moral views of racists and bigots. Some people (I’m looking at you, Steven Pinker) have written long books that defend the idea that, although there have been setbacks, there has been a general upward trend in our moral attitudes over the course of human history. Martin Luther King once said that the arc of the moral universe is long but bends towards justice.

But does moral progress really exist? And how would we know if it did? Philosophers have puzzled over this question for some time. The problem is this. There is no doubt that there has been moral change over time, and there is no doubt that we often think of our moral views as being more advanced than those of our ancestors, but it is hard to see exactly what justifies this belief. It seems like you would need some absolute moral standard or goal against which you can measure moral change to justify that belief. Do we have such a thing?

In this post, I want to offer some of my own preliminary and underdeveloped thoughts on the idea of moral progress. I do so by first clarifying the concept of moral progress, and then considering whether and when we can say that it exists. I will suggest that moral progress is real, and that we are at least sometimes justified in saying that it has taken place. Nevertheless, there are some serious puzzles and conceptual difficulties with identifying some forms of moral progress.

The info is here.

Monday, April 15, 2019

Tech giants are seeking help on AI ethics. Where they seek it matters.

Dave Gershgorn
quartz.com
Originally posted March 30, 2019

Here is an excerpt:

Tech giants are starting to create mechanisms for outside experts to help them with AI ethics—but not always in the ways ethicists want. Google, for instance, announced the members of its new AI ethics council this week—such boards promise to be a rare opportunity for underrepresented groups to be heard. It faced criticism, however, for selecting Kay Coles James, the president of the conservative Heritage Foundation. James has made statements against the Equality Act, which would protect sexual orientation and gender identity as federally protected classes in the US. Those and other comments would seem to put her at odds with Google’s pitch as a progressive and inclusive company. (Google declined Quartz’s request for comment.)

AI ethicist Joanna Bryson, one of the few members of Google’s new council who has an extensive background in the field, suggested that the inclusion of James helped the company make its ethics oversight more appealing to Republicans and conservative groups. Also on the council is Dyan Gibbens, who heads drone company Trumbull Unmanned and sat next to Donald Trump at a White House roundtable in 2017.

The info is here.

Death by a Thousand Clicks: Where Electronic Health Records Went Wrong

Erika Fry and Fred Schulte
Fortune.com
Originally posted on March 18, 2019

Here is an excerpt:

Damning evidence came from a whistleblower claim filed in 2011 against the company. Brendan Delaney, a British cop turned EHR expert, was hired in 2010 by New York City to work on the eCW implementation at Rikers Island, a jail complex that then had more than 100,000 inmates. But soon after he was hired, Delaney noticed scores of troubling problems with the system, which became the basis for his lawsuit. The patient medication lists weren’t reliable; prescribed drugs would not show up, while discontinued drugs would appear as current, according to the complaint. The EHR would sometimes display one patient’s medication profile accompanied by the physician’s note for a different patient, making it easy to misdiagnose or prescribe a drug to the wrong individual. Prescriptions, some 30,000 of them in 2010, lacked proper start and stop dates, introducing the opportunity for under- or overmedication. The eCW system did not reliably track lab results, concluded Delaney, who tallied 1,884 tests for which they had never gotten outcomes.

(cut)

Electronic health records were supposed to do a lot: make medicine safer, bring higher-quality care, empower patients, and yes, even save money. Boosters heralded an age when researchers could harness the big data within to reveal the most effective treatments for disease and sharply reduce medical errors. Patients, in turn, would have truly portable health records, being able to share their medical histories in a flash with doctors and hospitals anywhere in the country—essential when life-and-death decisions are being made in the ER.

But 10 years after President Barack Obama signed a law to accelerate the digitization of medical records—with the federal government, so far, sinking $36 billion into the effort—America has little to show for its investment.

The info is here.

Sunday, April 14, 2019

Scientists Grew a Mini-Brain in a Dish, And It Connected to a Spinal Cord by Itself

Carly Cassella
www.sciencealert.com
Originally posted March 20, 2019

Lab-growing the most complex structure in the known Universe may sound like an impossible task, but that hasn't stopped scientists from trying.

After years of work, researchers in the UK have now cultivated one of the most sophisticated miniature brains-in-a-dish yet, and it actually managed to behave in a slightly freaky fashion.

The grey blob was composed of about two million organised neurons, which is similar to the human foetal brain at 12 to 13 weeks. At this stage, this so-called 'brain organoid' is not complex enough to have any thoughts, feelings, or consciousness - but that doesn't make it entirely inert.

When placed next to a piece of mouse spinal cord and a piece of mouse muscle tissue, this disembodied, pea-sized blob of human brain cells sent out long, probing tendrils to check out its new neighbours.

Using long-term live microscopy, researchers were able to watch as the mini-brain spontaneously connected itself to the nearby spinal cord and muscle tissue.

The info is here.

Saturday, April 13, 2019

Nudging the better angels of our nature: A field experiment on morality and well-being.

Adam Waytz & Wilhelm Hofmann
Emotion, Feb 28, 2019, No Pagination Specified

Abstract

A field experiment examines how moral behavior, moral thoughts, and self-benefiting behavior affect daily well-being. Using experience sampling technology, we randomly grouped participants over 10 days to either behave morally, have moral thoughts, or do something positive for themselves. Participants received treatment-specific instructions in the morning of 5 days and no instructions on the other 5 control days. At each day’s end, participants completed measures that examined, among others, subjective well-being, self-perceived morality and empathy, and social isolation and closeness. Full analyses found limited evidence for treatment- versus control-day differences. However, restricting analyses to occasions on which participants complied with instructions revealed treatment- versus control-day main effects on all measures, while showing that self-perceived morality and empathy toward others particularly increased in the moral deeds and moral thoughts group. These findings suggest that moral behavior, moral thoughts, and self-benefiting behavior are all effective means of boosting well-being, but only moral deeds and, perhaps surprisingly, also moral thoughts strengthen the moral self-concept and empathy. Results from an additional study assessing laypeople’s predictions suggest that people do not fully intuit this pattern of results.
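As a rough illustration of the compliant-only analysis the abstract describes (comparing treatment days with control days only on occasions when participants actually followed the instructions), here is a sketch in Python. The table layout and every column name are hypothetical assumptions for illustration, not the authors' actual data or analysis code.

```python
# Hypothetical sketch of a compliant-only treatment/control contrast.
# The file and all column names ("condition", "day_type", "complied",
# "well_being") are assumptions, not the study's real variables.
import pandas as pd

df = pd.read_csv("daily_diary.csv")  # one row per participant-day (hypothetical)

# Keep control days as-is, but keep treatment days only when the
# participant complied with that day's instructions.
compliant = df[(df["day_type"] == "control") | (df["complied"] == 1)]

# Mean well-being by condition (moral deeds / moral thoughts / treat yourself)
# and day type, then the treatment-day minus control-day difference.
summary = (compliant.groupby(["condition", "day_type"])["well_being"]
           .mean()
           .unstack("day_type"))
summary["treatment_minus_control"] = summary["treatment"] - summary["control"]
print(summary)
```

A real analysis of experience-sampling data like this would model the repeated measures within participants (for example, with mixed-effects models) rather than simple group means, so treat this only as the shape of the comparison.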

Here is part of the Discussion:

Overall, inducing moral thoughts and behaviors toward others enhanced feelings of virtuousness compared to the case for self-serving behavior. This makes sense given that people likely internalized their moral thoughts and behaviors in the two moral conditions, whereas the treat-yourself condition did not direct participants toward morality. Restricting analyses to days when people complied with treatment-specific instructions revealed significant positive effects on satisfaction for all treatments. That is, compared to receiving no instructions to behave morally, think morally, or treat oneself, receiving and complying with such instructions on treatment-specific days increased happiness and satisfaction with one’s life. Although the effect size was highest in the treat-yourself condition, improvements in satisfaction were statistically equivalent across conditions. Overall, the moral deeds condition in this compliant-only analysis revealed the broadest improvements across other measures related to well-being, whereas the treat-yourself condition was the only condition to significantly reduce exhaustion. Examining instances when participants reported behaving morally, thinking morally, or behaving self-servingly, independent of treatment, revealed comparable results for moral deeds and self-treats enhancing well-being generally, with moral thoughts enhancing most measures of well-being as well.

The research is here.

Friday, April 12, 2019

It’s Not Enough to Be Right—You Also Have to Be Kind

Ryan Holiday
www.medium.com
Originally posted on March 20, 2019

Here is an excerpt:

Reason is easy. Being clever is easy. Humiliating someone who is in the wrong is easy too. But putting yourself in their shoes, kindly nudging them to where they need to be, understanding that they have emotional and irrational beliefs just like you have emotional and irrational beliefs—that’s all much harder. So is not writing off other people. So is spending time working on the plank in your own eye rather than on the splinter in theirs. We know we wouldn’t respond well to someone talking to us that way, but we seem to think it’s okay to do it to other people.

There is a great clip of Joe Rogan talking during the immigration crisis last year. He doesn’t make some fact-based argument about whether immigration is or isn’t a problem. He doesn’t attack anyone on either side of the issue. He just talks about what it feels like—to him—to hear a mother screaming for the child she’s been separated from. The clip has been seen millions of times now and undoubtedly has changed more minds than a government shutdown, than the squabbles and fights on CNN, than the endless op-eds and think-tank reports.

Rogan doesn’t even tell anyone what to think. (Though, ironically, the clip was abused by plenty of editors who tried to make it partisan). He just says that if you can’t relate to that mom and her pain, you’re not on the right team. That’s the right way to think about it.

The info is here.