Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Monday, January 20, 2020

Chinese court sentences 'gene-editing' scientist to three years in prison

Huizhong Wu and Lusha Zhang
kfgo.com
Originally posted 29 Dec 19

A Chinese court sentenced the scientist who created the world's first "gene-edited" babies to three years in prison on Monday for illegally practising medicine and violating research regulations, the official Xinhua news agency said.

In November 2018, He Jiankui, then an associate professor at Southern University of Science and Technology in Shenzhen, said he had used gene-editing technology known as CRISPR-Cas9 to change the genes of twin girls to protect them from getting infected with the AIDS virus in the future.

The backlash in China and globally over the ethics of his research and work was swift and widespread.

Xinhua said He and his collaborators forged ethical review materials and recruited couples in which the man had AIDS in order to carry out the gene-editing. His experiments, it said, resulted in two women giving birth to three gene-edited babies.

The court also handed lesser sentences to Zhang Renli and Qin Jinzhou, who worked at two unnamed medical institutions, for having conspired with He in his work.

The info is here.

What Is Prudent Governance of Human Genome Editing?

Scott J. Schweikart
AMA J Ethics. 2019;21(12):E1042-1048.
doi: 10.1001/amajethics.2019.1042.

Abstract

CRISPR technology has made questions about how best to regulate human genome editing immediately relevant. A sound and ethical governance structure for human genome editing is necessary, as the consequences of this new technology are far-reaching and profound. Because there are currently many risks associated with genome editing technology, the extent of which is unknown, regulatory prudence is ideal. When considering how best to create a prudent governance scheme, we can look to 2 guiding examples: the Asilomar conference of 1975 and the German Ethics Council guidelines for human germline intervention. Both models offer a path towards prudent regulation in the face of unknown and significant risks.

Here is an excerpt:

Beyond this key distinction, the potential risks and consequences—both to individuals and society—of human genome editing are relevant to ethical considerations of nonmaleficence, beneficence, justice, and respect for autonomy and are thus also relevant to the creation of an appropriate regulatory model. Because genome editing technology is in its early stages, it poses safety risks, the off-target effects of CRISPR being one example. Another issue is whether gene editing is done for therapeutic or enhancement purposes. While either purpose can prove beneficial, enhancement has potential for abuse.

Moreover, concerns exist that genome editing for enhancement can thwart social justice, as wealthy people will likely have greater ability to enhance their genomes (and thus presumably certain physical and mental characteristics), furthering social and class divides. With regard to germline editing, a relevant concern is how, during the informed consent process, to respect the autonomy of persons in future generations whose genomes are modified before birth. The questions raised by genome editing are profound, and the risks—both to the individual and to society—are evident. Left without proper governance, significant harmful consequences are possible.

The info is here.

Sunday, January 19, 2020

A Right to a Human Decision

Aziz Z. Huq
Virginia Law Review, Vol. 105
U of Chicago, Public Law Working Paper No. 713

Abstract

Recent advances in computational technologies have spurred anxiety about a shift of power from human to machine decision-makers. From prison sentences to loan approvals to college applications, corporate and state actors increasingly lean on machine-learning tools (a subset of artificial intelligence) to allocate goods and to assign coercion. Machine-learning tools are perceived to be eclipsing, even extinguishing, human agency in ways that sacrifice important individual interests. An emerging legal response to such worries is a right to a human decision. European law has already embraced the idea in the General Data Protection Regulation. American law, especially in the criminal justice domain, is already moving in the same direction. But no jurisdiction has defined with precision what that right entails, or furnished a clear justification for its creation.


This Article investigates the legal possibilities of a right to a human decision. I first define the conditions of technological plausibility for that right as applied against state action. To understand its technological predicates, I specify the margins along which machine decisions are distinct from human ones. Such technological contextualization enables a nuanced exploration of why, or indeed whether, the gaps that do separate human and machine decisions might have normative import. Based on this technological accounting, I then analyze the normative stakes of a right to a human decision. I consider three potential normative justifications: (a) an appeal to individual interests in participation and reason-giving; (b) worries about the insufficiently reasoned or individuated quality of state action; and (c) arguments based on negative externalities. A careful analysis of these three grounds suggests that there is no general justification for adopting a right to a human decision by the state. Normative concerns about insufficiently reasoned or accurate decisions, which have a particularly powerful hold on the legal imagination, are best addressed in other ways. Similarly, concerns about the ways that algorithmic tools create asymmetries of social power are not parried by a right to a human decision. Indeed, rather than firmly supporting a right to a human decision, available evidence tentatively points toward a countervailing ‘right to a well-calibrated machine decision’ as ultimately more normatively well-grounded.

The paper can be downloaded here.

Saturday, January 18, 2020

Could a Rising Robot Workforce Make Humans Less Prejudiced?

Jackson, J., Castelo, N., & Gray, K. (2019).
American Psychologist.

Abstract

Automation is becoming ever more prevalent, with robot workers replacing many human employees. Many perspectives have examined the economic impact of a robot workforce, but here we consider its social impact: how will the rise of robot workers affect intergroup relations? Whereas some past research suggests that more robots will lead to more intergroup prejudice, we suggest that robots could also reduce prejudice by highlighting commonalities between all humans. As robot workers become more salient, intergroup differences—including racial and religious differences—may seem less important, fostering a perception of a common human identity (i.e., “panhumanism”). Six studies (ΣN = 3,312) support this hypothesis. Anxiety about the rising robot workforce predicts less anxiety about human out-groups (Study 1), and priming the salience of a robot workforce reduces prejudice towards out-groups (Study 2), makes people more accepting of out-group members as leaders and family members (Study 3), and increases wage equality across in-group and out-group members in an economic simulation (Study 4). This effect is mediated by panhumanism (Studies 5-6), suggesting that the perception of a common human in-group explains why robot salience reduces prejudice. We discuss why automation may sometimes exacerbate intergroup tensions and at other times reduce them.

From the General Discussion

An open question remains about when automation helps versus harms intergroup relations. Our evidence is optimistic, showing that robot workers can increase solidarity between human groups. Yet other studies are pessimistic, showing that reminders of rising automation can increase people’s perceived material insecurity, leading them to feel more threatened by immigrants and foreign workers (Im et al., in press; Frey, Berger, & Chen, 2017), and data that we gathered across 37 nations—summarized in our supplemental materials—suggest that the countries that have automated the fastest over the last 42 years have also increased more in explicit prejudice towards out-groups, an effect that is partially explained by rising unemployment rates.
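The mediation result in the abstract (robot salience increases panhumanism, which in turn reduces prejudice) follows the standard product-of-coefficients logic. Below is a minimal sketch of that kind of test on simulated data; the variable names, effect sizes, and data are assumptions for illustration, not the authors' materials.

```python
# Minimal mediation sketch: does the mediator (panhumanism) carry the
# effect of the manipulation (robot salience) on the outcome (prejudice)?
# All data are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
robot_salience = rng.integers(0, 2, n)                   # 0 = control, 1 = robot prime
panhumanism = 0.5 * robot_salience + rng.normal(size=n)  # mediator
prejudice = -0.4 * panhumanism + rng.normal(size=n)      # outcome

# Path a: manipulation -> mediator
path_a = sm.OLS(panhumanism, sm.add_constant(robot_salience)).fit()
# Paths b and c': mediator and manipulation -> outcome
exog = sm.add_constant(np.column_stack([robot_salience, panhumanism]))
path_b = sm.OLS(prejudice, exog).fit()

indirect = path_a.params[1] * path_b.params[2]  # indirect effect = a * b
print(f"indirect effect via panhumanism: {indirect:.3f}")
```

In practice an indirect effect like this would be reported with bootstrapped confidence intervals rather than a bare point estimate; the sketch only shows the structure of the test.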

The research is here.

Friday, January 17, 2020

'DNA is not your destiny': Genetics a poor indicator of health

Nicole Bergot
Edmonton Journal
Originally posted 18 Dec 19

The vast majority of diseases, including many cancers, diabetes, and Alzheimer’s, have a genetic contribution of just five to 10 per cent, according to a meta-analysis of data from studies that examine relationships between common gene mutations, or single nucleotide polymorphisms (SNPs), and different conditions.

“Simply put, DNA is not your destiny, and SNPs are duds for disease prediction,” said study co-author David Wishart, professor in the department of biological sciences and the department of computing science.

But there are exceptions, including Crohn’s disease, celiac disease, and macular degeneration, which have a genetic contribution of approximately 40 to 50 per cent.

“Despite these rare exceptions, it is becoming increasingly clear that the risks for getting most diseases arise from your metabolism, your environment, your lifestyle, or your exposure to various kinds of nutrients, chemicals, bacteria, or viruses,” said Wishart.

The info is here.

Consciousness is real

Massimo Pigliucci
aeon.com
Originally published 16 Dec 19

Here is an excerpt:

Here is where the fundamental divide in philosophy of mind occurs, between ‘dualists’ and ‘illusionists’. Both camps agree that there is more to consciousness than the access aspect and, moreover, that phenomenal consciousness seems to have nonphysical properties (the ‘what is it like’ thing). From there, one can go in two very different directions: the scientific horn of the dilemma, attempting to explain how science might provide us with a satisfactory account of phenomenal consciousness, as Frankish does; or the antiscientific horn, claiming that phenomenal consciousness is squarely outside the domain of competence of science, as David Chalmers has been arguing for most of his career, for instance in his book The Conscious Mind (1996).

By embracing the antiscientific position, Chalmers & co are forced to go dualist. Dualism is the notion that physical and mental phenomena are somehow irreconcilable, two different kinds of beasts, so to speak. Classically, dualism concerns substances: according to René Descartes, the body is made of physical stuff (in Latin, res extensa), while the mind is made of mental stuff (in Latin, res cogitans). Nowadays, thanks to our advances in both physics and biology, nobody takes substance dualism seriously anymore. The alternative is something called property dualism, which acknowledges that everything – body and mind – is made of the same basic stuff (quarks and so forth), but that this stuff somehow (notice the vagueness here) changes when things get organised into brains and special properties appear that are nowhere else to be found in the material world. (For more on the difference between property and substance dualism, see Scott Calef’s definition.)

The ‘illusionists’, by contrast, take the scientific route, accepting physicalism (or materialism, or some other similar ‘ism’), meaning that they think – with modern science – not only that everything is made of the same basic kind of stuff, but that there are no special barriers separating physical from mental phenomena. However, since these people agree with the dualists that phenomenal consciousness seems to be spooky, the only option open to them seems to be that of denying the existence of whatever appears not to be physical. Hence the notion that phenomenal consciousness is a kind of illusion.

The essay is here.

Thursday, January 16, 2020

Ethics In AI: Why Values For Data Matter

Marc Teerlink
forbes.com
Originally posted 18 Dec 19

Here is an excerpt:

Data Is an Asset, and It Must Have Values

Already, 22% of U.S. companies have attributed part of their profits to AI and advanced cases of (AI-infused) predictive analytics.

According to a recent study SAP conducted in conjunction with the Economist Intelligence Unit, organizations doing the most with machine learning have experienced 43% more growth on average versus those who aren’t using AI and ML at all — or not using AI well.

One of their secrets: they treat data as an asset, the same way organizations treat inventory, fleets, and manufacturing assets.

They start with clear data governance with executive ownership and accountability (for a concrete example of how this looks, here are some principles and governance models that we at SAP apply in our daily work).

So, do treat data as an asset because, no matter how powerful the algorithm, poor training data will limit the effectiveness of artificial intelligence and predictive analytics.
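That last point is easy to demonstrate. The sketch below is a hypothetical illustration (not from the article or SAP): it trains the same logistic-regression model on progressively noisier labels and shows test accuracy falling even though the algorithm itself never changes.

```python
# Minimal sketch: corrupting a fraction of training labels caps what even
# a well-tuned model can learn. Data and noise levels are invented for
# illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise in (0.0, 0.2, 0.4):
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_noisy)) < noise  # flip this fraction of labels
    y_noisy[flip] = 1 - y_noisy[flip]
    acc = LogisticRegression(max_iter=1000).fit(X_tr, y_noisy).score(X_te, y_te)
    print(f"label noise {noise:.0%}: test accuracy {acc:.2f}")
```

The features and test set are held fixed; only the quality of the training labels degrades, which is the sense in which the data, not the algorithm, is the binding asset.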

The info is here.

Inaccurate group meta-perceptions drive negative out-group attributions in competitive contexts

Lees, J., & Cikara, M. (2019).
Nature Human Behaviour.

Abstract

Across seven experiments and one survey (N = 4,282), people consistently overestimated out-group negativity towards the collective behavior of their in-group. This negativity bias in group meta-perception was present across multiple competitive (but not cooperative) intergroup contexts, and appears to be yoked to group psychology more generally; we observed negativity bias for estimation of out-group, anonymized-group, and even fellow in-group members’ perceptions. Importantly, in the context of American politics, greater inaccuracy was associated with increased belief that the out-group is motivated by purposeful obstructionism. However, an intervention that informed participants of the inaccuracy of their beliefs reduced negative out-group attributions, and was more effective for those whose group meta-perceptions were more inaccurate. In sum, we highlight a pernicious bias in social judgments of how we believe ‘they’ see ‘our’ behavior, demonstrate how such inaccurate beliefs can exacerbate intergroup conflict, and provide an avenue for reducing the negative effects of inaccuracy.

From the Discussion

Our findings highlight a consistent, pernicious inaccuracy in social perception, along with how these inaccurate perceptions relate to negative attributions towards out-groups. More broadly, inaccurate and overly negative GMPs exist across multiple competitive intergroup contexts, and we find no evidence they differ across the political spectrum. This suggests that there may be many domains of intergroup interaction where inaccurate GMPs could potentially diminish the likelihood of cooperation and instead exacerbate the possibility of conflict. However, our findings also highlight a straightforward manner in which simply informing individuals of their inaccurate beliefs can reduce these negative attributions.
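As a concrete reading of the paper's key measure: group meta-perception (GMP) inaccuracy is the gap between how in-group members think the out-group rates their group's behavior and how the out-group actually rates it. Here is a minimal sketch of scoring that gap, with invented ratings on an assumed 1-7 scale (not the authors' data or code).

```python
# Minimal sketch of scoring GMP bias and inaccuracy; values are simulated.
import numpy as np

rng = np.random.default_rng(0)
# How out-group members actually rate the in-group's collective behavior (1-7)
actual = rng.normal(4.5, 1.0, 300).clip(1, 7)
# In-group members' estimates of those out-group ratings
estimated = rng.normal(3.5, 1.2, 300).clip(1, 7)

bias = estimated.mean() - actual.mean()                # signed direction of error
inaccuracy = np.abs(estimated - actual.mean()).mean()  # per-person absolute error
print(f"mean bias: {bias:.2f} (negative = out-group assumed harsher than it is)")
print(f"mean absolute inaccuracy: {inaccuracy:.2f}")
```

The paper's intervention amounts to showing participants that their estimates sit well below the out-group's actual ratings, which is why it was most effective for the least accurate respondents.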

A version of the research can be downloaded here.

Wednesday, January 15, 2020

French Executives Found Responsible for 35 Employees' Deaths by Suicide

Katie Way
vice.com
Originally posted 20 Dec 19

Today, in a landmark case for workers’ rights and workplace accountability, three former executives of the telecommunications company Orange (formerly known as France Télécom) were found guilty of “collective moral harassment” after creating a work environment that was found to have directly contributed to the deaths by suicide of 35 employees. This included, according to NPR, 19 employees who died by suicide between 2008 and 2009, many of whom “left notes blaming the company or who killed themselves at work.”

Why would a company lead a terror campaign against its own workers? Money, of course: The plan was enacted as part of a push to get rid of 22,000 employees in order to counterbalance $50 billion in debt incurred after the company privatized—it was formerly part of the French government’s Ministry of Posts and Telecommunications, meaning its employees were granted special protections as civil servants that prevented their higher-ups from firing them. According to the New York Times, the executives attempted to solve this dilemma by creating an “atmosphere of fear” and purposefully stoked “severe anxiety” in order to drive workers to quit. Former CEO Didier Lombard, sentenced to four months in jail and a $16,000 fine, reportedly called the strategies part of a plan to get rid of unwanted employees “either through the window or through the door.” Way to say the quiet part loud, Monsieur!