Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, January 21, 2020

How Could Commercial Terms of Use and Privacy Policies Undermine Informed Consent in the Age of Mobile Health?

AMA J Ethics. 2018;20(9):E864-872.
doi: 10.1001/amajethics.2018.864.

Abstract

Granular personal data generated by mobile health (mHealth) technologies coupled with the complexity of mHealth systems creates risks to privacy that are difficult to foresee, understand, and communicate, especially for purposes of informed consent. Moreover, commercial terms of use, to which users are almost always required to agree, depart significantly from standards of informed consent. As data use scandals increasingly surface in the news, the field of mHealth must advocate for user-centered privacy and informed consent practices that motivate patients’ and research participants’ trust. We review the challenges and relevance of informed consent and discuss opportunities for creating new standards for user-centered informed consent processes in the age of mHealth.

The info is here.

10 Years Ago, DNA Tests Were The Future Of Medicine. Now They’re A Social Network — And A Data Privacy Mess

Peter Aldhaus
buzzfeednews.com
Originally posted 11 Dec 19

Here is an excerpt:

But DNA testing can reveal uncomfortable truths, too. Families have been torn apart by the discovery that the man they call “Dad” is not the biological father of his children. Home DNA tests can also be used to show that a relative is a rapist or a killer.

That possibility burst into the public consciousness in April 2018, with the arrest of Joseph James DeAngelo, alleged to be the Golden State Killer responsible for at least 13 killings and more than 50 rapes in the 1970s and 1980s. DeAngelo was finally tracked down after DNA left at the scene of a 1980 double murder was matched to people in GEDmatch who were the killer's third or fourth cousins. Through months of painstaking work, investigators working with the genealogist Barbara Rae-Venter built family trees that converged on DeAngelo.

Genealogists had long realized that databases like GEDmatch could be used in this way, but had been wary of working with law enforcement — fearing that DNA test customers would object to the idea of cops searching their DNA profiles and rummaging around in their family trees.

But the Golden State Killer’s crimes were so heinous that the anticipated backlash initially failed to materialize. Indeed, a May 2018 survey of more than 1,500 US adults found that 80% backed police using public genealogy databases to solve violent crimes.

“I was very surprised with the Golden State Killer case how positive the reaction was across the board,” CeCe Moore, a genealogist known for her appearances on TV, told BuzzFeed News a couple of months after DeAngelo’s arrest.

The info is here.

Monday, January 20, 2020

Chinese court sentences 'gene-editing' scientist to three years in prison

Huizhong Wu and Lusha Zhan
kfgo.com
Originally posted 29 Dec 19

A Chinese court sentenced the scientist who created the world's first "gene-edited" babies to three years in prison on Monday for illegally practising medicine and violating research regulations, the official Xinhua news agency said.

In November 2018, He Jiankui, then an associate professor at Southern University of Science and Technology in Shenzhen, said he had used gene-editing technology known as CRISPR-Cas9 to change the genes of twin girls to protect them from getting infected with the AIDS virus in the future.

The backlash in China and globally about the ethics of his research and work was fast and widespread.

Xinhua said He and his collaborators forged ethical review materials and recruited men with AIDS who were part of a couple to carry out the gene-editing. His experiments, it said, resulted in two women giving birth to three gene-edited babies.

The court also handed lesser sentences to Zhang Renli and Qin Jinzhou, who worked at two unnamed medical institutions, for having conspired with He in his work.

The info is here.

What Is Prudent Governance of Human Genome Editing?

Scott J. Schweikart
AMA J Ethics. 2019;21(12):E1042-1048.
doi: 10.1001/amajethics.2019.1042.

Abstract

CRISPR technology has made questions about how best to regulate human genome editing immediately relevant. A sound and ethical governance structure for human genome editing is necessary, as the consequences of this new technology are far-reaching and profound. Because there are currently many risks associated with genome editing technology, the extent of which are unknown, regulatory prudence is ideal. When considering how best to create a prudent governance scheme, we can look to 2 guiding examples: the Asilomar conference of 1975 and the German Ethics Council guidelines for human germline intervention. Both models offer a path towards prudent regulation in the face of unknown and significant risks.

Here is an excerpt:

Beyond this key distinction, the potential risks and consequences—both to individuals and society—of human genome editing are relevant to ethical considerations of nonmaleficence, beneficence, justice, and respect for autonomy and are thus also relevant to the creation of an appropriate regulatory model. Because genome editing technology is at its beginning stages, it poses safety risks, the off-target effects of CRISPR being one example. Another issue is whether gene editing is done for therapeutic or enhancement purposes. While either purpose can prove beneficial, enhancement has potential for abuse.
Moreover, concerns exist that genome editing for enhancement can thwart social justice, as wealthy people will likely have greater ability to enhance their genome (and thus presumably certain physical and mental characteristics), furthering social and class divides. With regard to germline editing, a relevant concern is how, during the informed consent process, to respect the autonomy of persons in future generations whose genomes are modified before birth. The questions raised by genome editing are profound, and the risks—both to the individual and to society—are evident. Left without proper governance, significant harmful consequences are possible.

The info is here.

Sunday, January 19, 2020

A Right to a Human Decision

Aziz Z. Huq
Virginia Law Review, Vol. 105
U of Chicago, Public Law Working Paper No. 713

Abstract

Recent advances in computational technologies have spurred anxiety about a shift of power from human to machine decision-makers. From prison sentences to loan approvals to college applications, corporate and state actors increasingly lean on machine learning tools (a subset of artificial intelligence) to allocate goods and to assign coercion. Machine-learning tools are perceived to be eclipsing, even extinguishing, human agency in ways that sacrifice important individual interests. An emerging legal response to such worries is a right to a human decision. European law has already embraced the idea in the General Data Protection Regulation. American law, especially in the criminal justice domain, is already moving in the same direction. But no jurisdiction has defined with precision what that right entails, or furnished a clear justification for its creation.


This Article investigates the legal possibilities of a right to a human decision. I first define the conditions of technological plausibility for that right as applied against state action. To understand its technological predicates, I specify the margins along which machine decisions are distinct from human ones. Such technological contextualization enables a nuanced exploration of why, or indeed whether, the gaps that do separate human and machine decisions might have normative import. Based on this technological accounting, I then analyze the normative stakes of a right to a human decision. I consider three potential normative justifications: (a) an appeal to individual interests in participation and reason-giving; (b) worries about the insufficiently reasoned or individuated quality of state action; and (c) arguments based on negative externalities. A careful analysis of these three grounds suggests that there is no general justification for adopting a right to a human decision by the state. Normative concerns about insufficiently reasoned or accurate decisions, which have a particularly powerful hold on the legal imagination, are best addressed in other ways. Similarly, concerns about the ways that algorithmic tools create asymmetries of social power are not parried by a right to a human decision. Indeed, rather than firmly supporting a right to a human decision, available evidence tentatively points toward a countervailing ‘right to a well-calibrated machine decision’ as ultimately more normatively well-grounded.

The paper can be downloaded here.

Saturday, January 18, 2020

Could a Rising Robot Workforce Make Humans Less Prejudiced?

Jackson, J., Castelo, N., & Gray, K. (2019).
American Psychologist.

Automation is becoming ever more prevalent, with robot workers replacing many human employees. Many perspectives have examined the economic impact of a robot workforce, but here we consider its social impact: how will the rise of robot workers affect intergroup relations? Whereas some past research suggests that more robots will lead to more intergroup prejudice, we suggest that robots could also reduce prejudice by highlighting commonalities between all humans. As robot workers become more salient, intergroup differences—including racial and religious differences—may seem less important, fostering a perception of a common human identity (i.e., “panhumanism”). Six studies (total N = 3,312) support this hypothesis. Anxiety about the rising robot workforce predicts less anxiety about human out-groups (Study 1), and priming the salience of a robot workforce reduces prejudice towards out-groups (Study 2), makes people more accepting of out-group members as leaders and family members (Study 3), and increases wage equality across in-group and out-group members in an economic simulation (Study 4). This effect is mediated by panhumanism (Studies 5-6), suggesting that the perception of a common human in-group explains why robot salience reduces prejudice. We discuss why automation may sometimes exacerbate intergroup tensions and other times reduce them.

From the General Discussion

An open question remains about when automation helps versus harms intergroup relations. Our evidence is optimistic, showing that robot workers can increase solidarity between human groups. Yet other studies are pessimistic, showing that reminders of rising automation can increase people’s perceived material insecurity, leading them to feel more threatened by immigrants and foreign workers (Im et al., in press; Frey, Berger, & Chen, 2017), and data that we gathered across 37 nations—summarized in our supplemental materials—suggest that the countries that have automated the fastest over the last 42 years have also increased more in explicit prejudice towards out-groups, an effect that is partially explained by rising unemployment rates.

The research is here.

Friday, January 17, 2020

'DNA is not your destiny': Genetics a poor indicator of health

Nicole Bergot
Edmonton Journal
Originally posted 18 Dec 19

The vast majority of diseases, including many cancers, diabetes, and Alzheimer’s, have a genetic contribution of just five to 10 per cent, shows the meta-analysis of data from studies that examine relationships between common gene mutations, or single nucleotide polymorphisms (SNPs), and different conditions.

“Simply put, DNA is not your destiny, and SNPs are duds for disease prediction,” said study co-author David Wishart, professor in the department of biological sciences and the department of computing science.

But there are exceptions, including Crohn’s disease, celiac disease, and macular degeneration, which have a genetic contribution of approximately 40 to 50 per cent.

“Despite these rare exceptions, it is becoming increasingly clear that the risks for getting most diseases arise from your metabolism, your environment, your lifestyle, or your exposure to various kinds of nutrients, chemicals, bacteria, or viruses,” said Wishart.

The info is here.

Consciousness is real

Massimo Pigliucci
aeon.com
Originally published 16 Dec 19

Here is an excerpt:

Here is where the fundamental divide in philosophy of mind occurs, between ‘dualists’ and ‘illusionists’. Both camps agree that there is more to consciousness than the access aspect and, moreover, that phenomenal consciousness seems to have nonphysical properties (the ‘what is it like’ thing). From there, one can go in two very different directions: the scientific horn of the dilemma, attempting to explain how science might provide us with a satisfactory account of phenomenal consciousness, as Frankish does; or the antiscientific horn, claiming that phenomenal consciousness is squarely outside the domain of competence of science, as David Chalmers has been arguing for most of his career, for instance in his book The Conscious Mind (1996).

By embracing the antiscientific position, Chalmers & co are forced to go dualist. Dualism is the notion that physical and mental phenomena are somehow irreconcilable, two different kinds of beasts, so to speak. Classically, dualism concerns substances: according to René Descartes, the body is made of physical stuff (in Latin, res extensa), while the mind is made of mental stuff (in Latin, res cogitans). Nowadays, thanks to our advances in both physics and biology, nobody takes substance dualism seriously anymore. The alternative is something called property dualism, which acknowledges that everything – body and mind – is made of the same basic stuff (quarks and so forth), but that this stuff somehow (notice the vagueness here) changes when things get organised into brains and special properties appear that are nowhere else to be found in the material world. (For more on the difference between property and substance dualism, see Scott Calef’s definition.)

The ‘illusionists’, by contrast, take the scientific route, accepting physicalism (or materialism, or some other similar ‘ism’), meaning that they think – with modern science – not only that everything is made of the same basic kind of stuff, but that there are no special barriers separating physical from mental phenomena. However, since these people agree with the dualists that phenomenal consciousness seems to be spooky, the only option open to them seems to be that of denying the existence of whatever appears not to be physical. Hence the notion that phenomenal consciousness is a kind of illusion.

The essay is here.

Thursday, January 16, 2020

Ethics In AI: Why Values For Data Matter

Marc Teerlink
forbes.com
Originally posted 18 Dec 19

Here is an excerpt:

Data Is an Asset, and It Must Have Values

Already, 22% of U.S. companies have attributed part of their profits to AI and advanced applications of AI-infused predictive analytics.

According to a recent study SAP conducted in conjunction with the Economist Intelligence Unit, organizations doing the most with machine learning have experienced 43% more growth on average versus those who aren't using AI and ML at all — or aren't using AI well.

One of their secrets: They treat data as an asset. The same way organizations treat inventory, fleet, and manufacturing assets.

They start with clear data governance with executive ownership and accountability (for a concrete example of how this looks, here are some principles and governance models that we at SAP apply in our daily work).

So, do treat data as an asset, because, no matter how powerful the algorithm, poor training data will limit the effectiveness of Artificial Intelligence and Predictive Analytics.

The info is here.