Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, July 31, 2019

US Senators Call for International Guidelines for Germline Editing

Jef Akst
www.the-scientist.com
Originally published July 16, 2019

Here is an excerpt:

“Gene editing is a powerful technology that has the potential to lead to new therapies for devastating and previously untreatable diseases,” Feinstein says in a statement. “However, like any new technology, there is potential for misuse. The international community must establish standards for gene-editing research to develop global ethical principles and prevent unethical researchers from moving to whichever country has the loosest regulations.” (Editing embryos for reproductive purposes is already illegal in the US.)

In addition, the resolution makes clear that the trio of senators “opposes the experiments that resulted in pregnancies using genome-edited human embryos”—referring to the revelation last fall that researcher He Jiankui had CRISPRed the genomes of two babies born in China.

The info is here.

The “Fake News” Effect: An Experiment on Motivated Reasoning and Trust in News

Michael Thaler
Harvard University
Originally published May 28, 2019

Abstract

When people receive information about controversial issues such as immigration policies, upward mobility, and racial discrimination, the information often evokes both what they currently believe and what they are motivated to believe. This paper theoretically and experimentally explores the importance of this latter channel, motivated reasoning, in inference. In the theory of motivated reasoning this paper develops, people misupdate from information by treating their motivated beliefs as an extra signal. To test the theory, I create a new experimental design in which people make inferences about the veracity of news sources. This design is unique in that it distinguishes motivated reasoning from Bayesian updating and confirmation bias, and it doesn't require eliciting people's entire belief distribution. It is also very portable: in a large online experiment, I find the first identifying evidence for politically driven motivated reasoning on eight different economic and social issues. Motivated reasoning leads people to become more polarized, less accurate, and more overconfident in their beliefs about these issues.
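
To make the mechanism concrete, here is a rough sketch of the updating rule the abstract describes (a schematic reading, not the paper's own notation; the symbols below are illustrative). A standard Bayesian judging whether a news source is truthful combines only a prior and the observed signal; the motivated reasoner behaves as if an extra signal pointing toward the motivated belief had also been observed:

\log \frac{P(\text{true} \mid s)}{P(\text{false} \mid s)}
  = \underbrace{\log \frac{P(\text{true})}{P(\text{false})}}_{\text{prior}}
  + \underbrace{\log \frac{P(s \mid \text{true})}{P(s \mid \text{false})}}_{\text{signal}}
  + \underbrace{\varphi \, m}_{\text{motivated belief, weight } \varphi \ge 0}

Setting \varphi = 0 recovers standard Bayesian updating, and confirmation bias would instead add a term proportional to the person's current belief rather than the motivated belief m; roughly speaking, that is the distinction the experimental design is built to exploit.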

From the Conclusion:

One interpretation of this paper is unambiguously bleak: people of all demographics engage in motivated reasoning to a similar degree, do so on essentially every topic they are asked about, and make particularly biased inferences on issues they find important. However, there is an alternative interpretation: this experiment takes a step towards better understanding motivated reasoning and makes it easier for future work to attenuate the bias. Using this experimental design, we can identify and estimate the magnitude of the bias; future projects that use interventions to attempt to mitigate motivated reasoning can use this estimated magnitude as an outcome variable. Since the bias does decrease utility in at least some settings, people may have demand for such interventions.

The research is here.

Tuesday, July 30, 2019

Is belief superiority justified by superior knowledge?

Michael P. Hall & Kaitlin T. Raimi
Journal of Experimental Social Psychology
Volume 76, May 2018, Pages 290-306

Abstract

Individuals expressing belief superiority—the belief that one's views are superior to other viewpoints—perceive themselves as better informed about that topic, but no research has verified whether this perception is justified. The present research examined whether people expressing belief superiority on four political issues demonstrated superior knowledge or superior knowledge-seeking behavior. Despite perceiving themselves as more knowledgeable, knowledge assessments revealed that belief-superior individuals exhibited the greatest gaps between their perceived and actual knowledge. When given the opportunity to pursue additional information in that domain, belief-superior individuals frequently favored agreeable over disagreeable information, but they also indicated awareness of this bias. Lastly, experimentally manipulated feedback about one's knowledge had some success in affecting belief superiority and the resulting information-seeking behavior. Specifically, when belief superiority is lowered, people attend to information they may have previously regarded as inferior. Implications of unjustified belief superiority and biased information pursuit for political discourse are discussed.

The research is here.

Ethics In The Digital Age: Protect Others' Data As You Would Your Own

Jeff Thomson
Forbes.com
Originally posted July 1, 2019

Here is an excerpt:

2. Ensure they are using people’s data with their consent. 

In theory, an increasing number of rights to data use are willingly signed over by people through digital acceptance of privacy policies. But a recent investigation by the European Commission, following up on the impact of GDPR, indicated that corporate privacy policies remain too difficult for consumers to understand or even read. When analyzing the ethics of using data, finance professionals must personally reflect on whether the way information is being used is consistent with how consumers, clients or employees understand and expect it to be used. Furthermore, they should question whether the data is being used in a way that is necessary for achieving business goals in an ethical manner.

3. Follow the “golden rule” when it comes to data. 

Finally, finance professionals must reflect on whether they would want their own personal information to be used to further business goals in the way that they are helping their organization use the data of others. This goes beyond regulations and the fine print of privacy agreements: it is adherence to the ancient, universal standard of refusing to do to other people what you would not want done to yourself. Admittedly, this is subjective and difficult to define. But finance professionals will be confronted with many situations in which there are no clear answers, and they must have the ability to think about the ethical implications of actions that might not necessarily be illegal.

The info is here.

Monday, July 29, 2019

Experts question the morality of creating human-monkey ‘chimeras’

Kay Vandette
www.earth.com
Originally published July 5, 2019

Earlier this year, scientists at the Kunming Institute of Zoology of the Chinese Academy of Sciences announced they had inserted a human gene into embryos that would become rhesus macaques, monkeys that share about 93 percent of their DNA with humans. The research, which was designed to give experts a better understanding of human brain development, has sparked controversy over whether this type of experimentation is ethical.

Some scientists believe that it is time to use human-monkey chimeric animals to pursue new insights into the progression of diseases such as Alzheimer’s. These genetically modified monkeys are referred to as chimeras, taking their name from a mythical creature composed of parts of various animals.

A resource guide on the science and ethics of chimeras written by Yale University researchers suggests that it is time to “cautiously” explore the creation of human-monkey chimeras.

“The search for a better animal model to simulate human disease has been a ‘holy grail’ of biomedical research for decades,” the Yale team wrote in Chimera Research: Ethics and Protocols. “Realizing the promise of human-monkey chimera research in an ethically and scientifically appropriate manner will require a coordinated approach.”

A team of experts led by Dr. Douglas Munoz of Queen’s University has been studying the onset of Alzheimer’s disease in monkeys by using injections of beta-amyloid. The accumulation of this protein in the brain is believed to kill nerve cells and initiate the degenerative process.

The info is here.

AI Ethics – Too Principled to Fail?

Brent Mittelstadt
Oxford Internet Institute
https://ssrn.com/abstract=3391293

Abstract

AI Ethics is now a global topic of discussion in academic and policy circles. At least 63 public-private initiatives have produced statements describing high-level principles, values, and other tenets to guide the ethical development, deployment, and governance of AI. According to recent meta-analyses, AI Ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI Ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach in the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.

The paper is here.

Shift from professional ethics to business ethics

The outputs of many AI Ethics initiatives resemble professional codes of ethics that address design requirements and the behaviours and values of individual professions. The legitimacy of particular applications and their underlying business interests remains largely unquestioned. This approach conveniently steers debate towards the transgressions of unethical individuals, and away from the collective failure of unethical businesses and business models. Developers will always be constrained by the institutions that employ them. To be truly effective, the ethical challenges of AI cannot be conceptualised as individual failures. Going forward, AI Ethics must become an ethics of AI businesses as well.

Sunday, July 28, 2019

Community Standards of Deception

Levine, Emma
Booth School of Business
(June 17, 2019).
Available at SSRN: https://ssrn.com/abstract=3405538

Abstract

We frequently claim that lying is wrong, despite modeling that it is often right. The present research sheds light on this tension by unearthing systematic cases in which people believe lying is ethical in everyday communication and by proposing and testing a theory to explain these cases. Using both inductive and experimental approaches, I demonstrate that deception is perceived to be ethical, and individuals want to be deceived, when deception is perceived to prevent unnecessary harm. I identify nine implicit rules – pertaining to the targets of deception and the topic and timing of a conversation – that specify the systematic circumstances in which deception is perceived to cause unnecessary harm, and I document the causal effect of each implicit rule on the endorsement of deception. This research provides insight into when and why people value honesty, and paves the way for future research on when and why people embrace deception.

Saturday, July 27, 2019

An Obligation to Enhance?

Anton Vedder
Topoi 2019; 38 (1) pp. 49-52. Available at SSRN: https://ssrn.com/abstract=3407867

Abstract

This article discusses some rather formal characteristics of possible obligations to enhance. Obligations to enhance can exist in the absence of good moral reasons. If obligation and duty, however, are considered as synonyms, the enhancement involved must be morally desirable in some respect. Since enhancers and enhanced can, but need not, coincide, advertency is appropriate regarding the question of who exactly is addressed by an obligation or a duty to enhance: the person on whom the enhancing treatment is performed, or the controller or the operator of the enhancement. The position of the operator, especially, is easily overlooked. The exact functionality of the specific enhancement is all-important, not only for the acceptability of a specific form of enhancement, but also for its chances of becoming a duty or morally obligatory. Finally, and most importantly, since obligations can exist without good moral reasons, there can be obligations to enhance that are not morally right, let alone desirable.

From the Conclusion:

Obligations to enhance can exist in the presence and in the absence of good moral reasons for them. Obligations are based on preceding promises, agreements or regulatory arrangements; they do not necessarily coincide with moral duties. The existence of such obligations therefore need not be morally desirable. If obligation and duty are considered as synonyms, the enhancement involved must be morally desirable in some respect. Since enhancers and enhanced can, but need not, coincide, advertency is appropriate regarding the question of who exactly is addressed by an obligation or a duty to enhance: the person on whom the enhancing treatment is performed, or the controller or the operator of the enhancement? The position of the operator, especially, is easily overlooked. Finally, the exact functionality of the specific enhancement is all-important, not only for the acceptability of a specific form of enhancement, but also for its chances of becoming a duty or morally obligatory.

Friday, July 26, 2019

Dark Pathways to Achievement in Science: Researchers’ Achievement Goals Predict Engagement in Questionable Research Practices

Janke, S., Daumiller, M., & Rudert, S. C. (2019).
Social Psychological and Personality Science, 10(6), 783–791.

Abstract

Questionable research practices (QRPs) are a strongly debated topic in the scientific community. Hypotheses about the relationship between individual differences and QRPs are plentiful but have rarely been empirically tested. Here, we investigate whether researchers’ personal motivation (expressed by achievement goals) is associated with self-reported engagement in QRPs within a sample of 217 psychology researchers. Appearance approach goals (striving for skill demonstration) positively predicted engagement in QRPs, while learning approach goals (striving for skill development) were a negative predictor. These effects remained stable when also considering Machiavellianism, narcissism, and psychopathy in a latent multiple regression model. Additional moderation analyses revealed that the more researchers favored publishing over scientific rigor, the stronger the association between appearance approach goals and engagement in QRPs. The findings deliver first insights into the nature of the relationship between personal motivation and scientific malpractice.
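
For readers who want to picture the analysis, a simplified, non-latent version of the moderated regression described above might look like the following (the variable names are illustrative shorthand, not the authors' own specification):

\mathrm{QRP}_i = \beta_0 + \beta_1 \mathrm{Appear}_i + \beta_2 \mathrm{Learn}_i
  + \beta_3 \mathrm{Mach}_i + \beta_4 \mathrm{Narc}_i + \beta_5 \mathrm{Psyc}_i
  + \beta_6 \mathrm{PubPriority}_i + \beta_7 (\mathrm{Appear}_i \times \mathrm{PubPriority}_i) + \varepsilon_i

The reported pattern corresponds to \beta_1 > 0 (appearance approach goals), \beta_2 < 0 (learning approach goals), and \beta_7 > 0 (the moderation: the more publishing is favored over scientific rigor, the stronger the link between appearance approach goals and QRPs).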

The research can be found here.