Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, April 18, 2019

Google cancels AI ethics board in response to outcry

Kelsey Piper
www.Vox.com
Originally published April 4, 2019

This week, Vox and other outlets reported that Google’s newly created AI ethics board was falling apart amid controversy over several of the board members.

Well, it’s officially done falling apart — it’s been canceled. Google told Vox on Thursday that it’s pulling the plug on the ethics board.

The board survived for barely more than one week. Founded to guide “responsible development of AI” at Google, it would have had eight members and met four times over the course of 2019 to consider concerns about Google’s AI program. Those concerns include how AI can enable authoritarian states, how AI algorithms produce disparate outcomes, whether to work on military applications of AI, and more. But it ran into problems from the start.

Thousands of Google employees signed a petition calling for the removal of one board member, Heritage Foundation president Kay Coles James, over her comments about trans people and her organization’s skepticism of climate change. Meanwhile, the inclusion of drone company CEO Dyan Gibbens reopened old divisions in the company over the use of the company’s AI for military applications.

The info is here.

Why are smarter individuals more prosocial? A study on the mediating roles of empathy and moral identity

Qingke Guo, Peng Sun, Minghang Cai, Xiling Zhang, & Kexin Song
Intelligence
Volume 75, July–August 2019, Pages 1-8

Abstract

The purpose of this study is to examine whether there is an association between intelligence and prosocial behavior (PSB), and whether this association is mediated by empathy and moral identity. The Chinese versions of Raven's Standard Progressive Matrices, the Self-Report Altruism Scale Distinguished by the Recipient, the Interpersonal Reactivity Index, and the Internalization subscale of the Self-Importance of Moral Identity Scale were administered to 518 undergraduate students (254 female; mean age = 19.79 years). The results showed that fluid intelligence was significantly correlated with self-reported PSB; that moral identity, perspective taking, and empathic concern could account for the positive association between intelligence and PSB; and that the mediation effects of moral identity and empathy were consistent across gender.
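As a reader's aid, the mediation logic the abstract describes (intelligence predicting prosocial behavior partly through empathy and moral identity) can be sketched with two regressions. The sketch below uses synthetic data; the variable names and effect sizes are illustrative assumptions, not the authors' dataset or analysis code.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 518  # sample size matching the study

# Synthetic variables; the coefficients are made up for illustration.
intelligence = rng.normal(size=n)
empathy = 0.4 * intelligence + rng.normal(size=n)                    # path a
prosocial = 0.3 * empathy + 0.1 * intelligence + rng.normal(size=n)  # paths b, c'

# Path a: does the predictor relate to the mediator?
a = sm.OLS(empathy, sm.add_constant(intelligence)).fit().params[1]

# Paths b and c': mediator and predictor predicting the outcome together.
X = sm.add_constant(np.column_stack([empathy, intelligence]))
params = sm.OLS(prosocial, X).fit().params
b, c_prime = params[1], params[2]

print(f"indirect (mediated) effect a*b = {a*b:.3f}; direct effect c' = {c_prime:.3f}")
```

A nonzero indirect effect (a*b) alongside a reduced direct effect (c') is the signature of the partial mediation the paper reports.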

The article is here.

Here is part of the Discussion:

This is consistent with previous findings that highly intelligent individuals are more likely to engage in prosocial and civic activities (Aranda & Siyaranamual, 2014; Bekkers & Wiepking, 2011; Wiepking & Maas, 2009). One explanation of the intelligence-prosocial association is that highly intelligent individuals are better able to perceive and understand the desires and feelings of the person in need, and are quicker in making proper decisions and figuring out which behaviors should be enacted (Eisenberg et al., 2015; Gottfredson, 1997). Another explanation is that highly intelligent individuals are smart enough to realize that PSB is rewarding in the long run. PSB is rewarding because the helper is more likely to be selected as a coalition partner or a mate (Millet & Dewitte, 2007; Zahavi, 1977).

Wednesday, April 17, 2019

A New Model For AI Ethics In R&D

Forbes Insight Team
Forbes.com
Originally posted March 27, 2019

Here is an excerpt:

The traditional ethics oversight and compliance model has two major problems, whether it is used in biomedical research or in AI. First, a list of guiding principles—whether four or 40—just summarizes important ethical concerns without resolving the conflicts between them.

Say, for example, that the development of a life-saving AI diagnostic tool requires access to large sets of personal data. The principle of respecting autonomy—that is, respecting every individual’s rational, informed, and voluntary decision making about herself and her life—would demand consent for using that data. But the principle of beneficence—that is, doing good—would require that this tool be developed as quickly as possible to help those who are suffering, even if this means neglecting consent. Any board relying solely on these principles for guidance will inevitably face an ethical conflict, because no hierarchy ranks these principles.

Second, decisions handed down by these boards are problematic in themselves. Ethics boards are far removed from researchers, acting as all-powerful decision-makers. Once ethics boards make a decision, typically no appeals process exists and no other authority can validate their decision. Without effective guiding principles and appropriate due process, this model uses ethics boards to police researchers. It implies that researchers cannot be trusted and it focuses solely on blocking what the boards consider to be unethical.

We can develop a better model for AI ethics, one in which ethics complements and enhances research and development and where researchers are trusted collaborators with ethicists. This requires shifting our focus from principles and boards to ethical reasoning and teamwork, from ethics policing to ethics integration.

The info is here.

Warnings of a Dark Side to A.I. in Health Care

Cade Metz and Craig S. Smith
The New York Times
Originally published March 21, 2019

Here is an excerpt:

An adversarial attack exploits a fundamental aspect of the way many A.I. systems are designed and built. Increasingly, A.I. is driven by neural networks, complex mathematical systems that learn tasks largely on their own by analyzing vast amounts of data.

By analyzing thousands of eye scans, for instance, a neural network can learn to detect signs of diabetic blindness. This “machine learning” happens on such an enormous scale — human behavior is defined by countless disparate pieces of data — that it can produce unexpected behavior of its own.

In 2016, a team at Carnegie Mellon used patterns printed on eyeglass frames to fool face-recognition systems into thinking the wearers were celebrities. When the researchers wore the frames, the systems mistook them for famous people, including Milla Jovovich and John Malkovich.

A group of Chinese researchers pulled a similar trick by projecting infrared light from the underside of a hat brim onto the face of whoever wore the hat. The light was invisible to the wearer, but it could trick a face-recognition system into thinking the wearer was, say, the musician Moby, who is Caucasian, rather than an Asian scientist.

Researchers have also warned that adversarial attacks could fool self-driving cars into seeing things that are not there. By making small changes to street signs, they have duped cars into detecting a yield sign instead of a stop sign.
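The pattern behind these examples can be made concrete with a short sketch. Below is a minimal gradient-sign perturbation, in the style of the "fast gradient sign method"; it assumes a generic PyTorch image classifier, and the model, inputs, and epsilon value are hypothetical placeholders rather than code from any of the studies described above.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` nudged to increase the model's loss.

    Each pixel moves by at most `epsilon` in the direction of the loss
    gradient's sign -- often imperceptible to a human, but frequently
    enough to flip the predicted class.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage (hypothetical): for a trained classifier `model`, image batch `x`,
# and true labels `y`, model(fgsm_perturb(model, x, y)) will often
# disagree with model(x).
```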

The info is here.

Tuesday, April 16, 2019

Rise Of The Chief Ethics Officer

Forbes Insight Team
Forbes.com
Originally posted March 27, 2019

Here is an excerpt:

Robert Foehl is now executive-in-residence for business law and ethics at the Ohio University College of Business. In industry, he’s best known as the man who laid the ethical groundwork for Target as the company’s first director of corporate ethics.

At a company like Target, says Foehl, ethical issues arise every day. “This includes questions about where goods are sourced, how they are manufactured, the environment, justice and equality in the treatment of employees and customers, and obligations to the community,” he says. “In retail, the biggest issues tend to be around globalism and how we market to consumers. Are people being manipulated? Are we being discriminatory?”

For Foehl, all of these issues are just part of various ethical frameworks that he’s built over the years: complex philosophical frameworks that look at measures of happiness and suffering, the potential for individual harm, and even the impact of a decision on the “virtue” of the company. As he sees it, bringing a technology like AI into the mix has very little impact on that.

“The fact that you have an emerging technology doesn’t matter,” he says, “since you have thinking you can apply to any situation.” Whether it’s AI or big data or any other new tech, says Foehl, “we still put it into an ethical framework. Just because it involves a new technology doesn’t mean it’s a new ethical concept.”

The info is here.

Is there such a thing as moral progress?

John Danaher
Philosophical Disquisitions
Originally posted March 18, 2019

We often speak as if we believe in moral progress. We talk about recent moral changes, such as the legalisation of gay marriage, as ‘progressive’ moral changes. We express dismay at the ‘regressive’ moral views of racists and bigots. Some people (I’m looking at you, Steven Pinker) have written long books defending the idea that, although there have been setbacks, there has been a general upward trend in our moral attitudes over the course of human history. Martin Luther King once said that the arc of the moral universe is long but bends towards justice.

But does moral progress really exist? And how would we know if it did? Philosophers have puzzled over this question for some time. The problem is this. There is no doubt that there has been moral change over time, and there is no doubt that we often think of our moral views as being more advanced than those of our ancestors, but it is hard to see exactly what justifies this belief. It seems like you would need some absolute moral standard or goal against which you can measure moral change to justify that belief. Do we have such a thing?

In this post, I want to offer some of my own preliminary and underdeveloped thoughts on the idea of moral progress. I do so by first clarifying the concept of moral progress, and then considering whether and when we can say that it exists. I will suggest that moral progress is real, and that we are at least sometimes justified in saying that it has taken place. Nevertheless, there are some serious puzzles and conceptual difficulties with identifying some forms of moral progress.

The info is here.

Monday, April 15, 2019

Tech giants are seeking help on AI ethics. Where they seek it matters.

Dave Gershgorn
quartz.com
Originally posted March 30, 2019

Here is an excerpt:

Tech giants are starting to create mechanisms for outside experts to help them with AI ethics—but not always in the ways ethicists want. Google, for instance, announced the members of its new AI ethics council this week—such boards promise to be a rare opportunity for underrepresented groups to be heard. It faced criticism, however, for selecting Kay Coles James, the president of the conservative Heritage Foundation. James has made statements against the Equality Act, which would protect sexual orientation and gender identity as federally protected classes in the US. Those and other comments would seem to put her at odds with Google’s pitch as being a progressive and inclusive company. (Google declined Quartz’s request for comment.)

AI ethicist Joanna Bryson, one of the few members of Google’s new council who has an extensive background in the field, suggested that the inclusion of James helped the company make its ethics oversight more appealing to Republicans and conservative groups. Also on the council is Dyan Gibbens, who heads drone company Trumbull Unmanned and sat next to Donald Trump at a White House roundtable in 2017.

The info is here.

Death by a Thousand Clicks: Where Electronic Health Records Went Wrong

Erika Fry and Fred Schulte
Fortune.com
Originally posted on March 18, 2019

Here is an excerpt:

Damning evidence came from a whistleblower claim filed in 2011 against the company. Brendan Delaney, a British cop turned EHR expert, was hired in 2010 by New York City to work on the eCW implementation at Rikers Island, a jail complex that then had more than 100,000 inmates. But soon after he was hired, Delaney noticed scores of troubling problems with the system, which became the basis for his lawsuit. The patient medication lists weren’t reliable; prescribed drugs would not show up, while discontinued drugs would appear as current, according to the complaint. The EHR would sometimes display one patient’s medication profile accompanied by the physician’s note for a different patient, making it easy to misdiagnose or prescribe a drug to the wrong individual. Prescriptions, some 30,000 of them in 2010, lacked proper start and stop dates, introducing the opportunity for under- or overmedication. The eCW system did not reliably track lab results, concluded Delaney, who tallied 1,884 tests for which they had never gotten outcomes.
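To make those failure modes concrete, here is a toy sketch of the kind of record-consistency checks the complaint says were missing: flagging prescriptions without proper start/stop dates and lab tests with no outcome on file. The record format and field names are hypothetical, not eCW's actual data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prescription:
    drug: str
    start_date: Optional[str]  # ISO date string, or None if never recorded
    stop_date: Optional[str]

@dataclass
class LabOrder:
    test: str
    result: Optional[str]  # None if no outcome ever came back

def audit(prescriptions: list[Prescription], lab_orders: list[LabOrder]) -> list[str]:
    """Flag records showing the integrity problems described in the complaint."""
    flags = []
    for rx in prescriptions:
        if rx.start_date is None or rx.stop_date is None:
            flags.append(f"{rx.drug}: missing start/stop date (risk of under- or overmedication)")
    for order in lab_orders:
        if order.result is None:
            flags.append(f"{order.test}: ordered but no outcome on file")
    return flags

# Example: one open-ended prescription and one untracked lab test.
print(audit([Prescription("warfarin", "2010-06-01", None)],
            [LabOrder("INR", None)]))
```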

(cut)

Electronic health records were supposed to do a lot: make medicine safer, bring higher-quality care, empower patients, and yes, even save money. Boosters heralded an age when researchers could harness the big data within to reveal the most effective treatments for disease and sharply reduce medical errors. Patients, in turn, would have truly portable health records, being able to share their medical histories in a flash with doctors and hospitals anywhere in the country—essential when life-and-death decisions are being made in the ER.

But 10 years after President Barack Obama signed a law to accelerate the digitization of medical records—with the federal government, so far, sinking $36 billion into the effort—America has little to show for its investment.

The info is here.

Sunday, April 14, 2019

Scientists Grew a Mini-Brain in a Dish, And It Connected to a Spinal Cord by Itself

Carly Cassella
www.sciencealert.com
Originally posted March 20, 2019

Lab-growing the most complex structure in the known Universe may sound like an impossible task, but that hasn't stopped scientists from trying.

After years of work, researchers in the UK have now cultivated one of the most sophisticated miniature brains-in-a-dish yet, and it actually managed to behave in a slightly freaky fashion.

The grey blob was composed of about two million organised neurons, which is similar to the human foetal brain at 12 to 13 weeks. At this stage, this so-called 'brain organoid' is not complex enough to have any thoughts, feelings, or consciousness - but that doesn't make it entirely inert.

When placed next to a piece of mouse spinal cord and a piece of mouse muscle tissue, this disembodied, pea-sized blob of human brain cells sent out long, probing tendrils to check out its new neighbours.

Using long-term live microscopy, researchers were able to watch as the mini-brain spontaneously connected itself to the nearby spinal cord and muscle tissue.

The info is here.