Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Tuesday, February 4, 2020

Researchers: Are we on the cusp of an ‘AI winter’?

Sam Shead
bbc.com
Originally posted 12 Jan 20

Hype surrounding AI has peaked and troughed over the years as the abilities of the technology get overestimated and then re-evaluated.

The peaks are known as AI summers, and the troughs AI winters.

The 2010s were arguably the hottest AI summer on record, with tech giants repeatedly touting AI's abilities.

AI pioneer Yoshua Bengio, sometimes called one of the "godfathers of AI", told the BBC that AI's abilities were somewhat overhyped in the 2010s by certain companies with an interest in doing so.

There are signs, however, that the hype might be about to start cooling off.

"I have the sense that AI is transitioning to a new phase," said Katja Hoffman, a principal researcher at Microsoft Research in Cambridge.

Given the billions being invested in AI and the fact that there are likely to be more breakthroughs ahead, some researchers believe it would be wrong to call this new phase an AI winter.

The info is here.

Bounded awareness: Implications for ethical decision making

Max H. Bazerman and Ovul Sezer
Organizational Behavior and Human Decision Processes
Volume 136, September 2016, Pages 95-105

Abstract

In many of the business scandals of the new millennium, the perpetrators were surrounded by people who could have recognized the misbehavior, yet failed to notice it. To explain such inaction, management scholars have been developing the area of behavioral ethics and the more specific topic of bounded ethicality—the systematic and predictable ways in which even good people engage in unethical conduct without their own awareness. In this paper, we review research on both bounded ethicality and bounded awareness, and connect the two areas to highlight the challenges of encouraging managers and leaders to notice and act to stop unethical conduct. We close with directions for future research and suggest that noticing unethical behavior should be considered a critical leadership skill.

Bounded Ethicality

Within the broad topic of behavioral ethics is the much more specific topic of bounded ethicality (Chugh, Banaji, & Bazerman, 2005). Chugh et al. (2005) define bounded ethicality as the psychological processes that lead people to engage in ethically questionable behaviors that are inconsistent with their own preferred ethics. That is, if they were more reflective about their choices, they would make a different decision. This definition runs parallel to the concepts of bounded rationality (March & Simon, 1958) and bounded awareness (Chugh & Bazerman, 2007). In all three cases, a cognitive shortcoming keeps the actor from taking the action that she would choose with greater awareness. Importantly, if people overcame these boundaries, they would make decisions that are more in line with their ethical standards. Note that behavioral ethicists do not ask decision makers to follow particular values or rules, but rather try to help decision makers adhere more closely to their own personal values with greater reflection.

The paper can be downloaded here.

Monday, February 3, 2020

Explaining moral behavior: A minimal moral model.

Osman, M., & Wiegmann, A.
Experimental Psychology (2017)
64(2), 68-81.

Abstract

In this review we make a simple theoretical argument which is that for theory development, computational modeling, and general frameworks for understanding moral psychology researchers should build on domain-general principles from reasoning, judgment, and decision-making research. Our approach is radical with respect to typical models that exist in moral psychology that tend to propose complex innate moral grammars and even evolutionarily guided moral principles. In support of our argument we show that by using a simple value-based decision model we can capture a range of core moral behaviors. Crucially, the argument we propose is that moral situations per se do not require anything specialized or different from other situations in which we have to make decisions, inferences, and judgments in order to figure out how to act.
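The "simple value-based decision model" the authors invoke is not spelled out in the abstract, but the general idea can be sketched with a standard softmax choice rule over subjective option values. The sketch below is my illustration of that family of models, not the authors' implementation; the option labels and value numbers are invented for the example.

```python
import math
import random

def choose(options, values, temperature=1.0):
    """Pick an option with probability proportional to exp(value / temperature).

    Higher-valued options are chosen more often; the temperature controls
    how noisy (less value-driven) the choice is.
    """
    weights = [math.exp(v / temperature) for v in values]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(options, weights=probs, k=1)[0], probs

# A hypothetical moral dilemma framed as an ordinary value-based choice;
# the subjective values are made-up numbers for illustration only.
options = ["help the stranger", "walk past"]
values = [2.0, 0.5]
choice, probs = choose(options, values)
print(choice, probs)
```

The point of the sketch is that nothing in it is specific to morality: the same machinery used to model everyday choices can, on the authors' argument, capture moral ones.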

From the Implications section:

If instead moral behavior is viewed as a domain-general process, the findings can easily be accounted for based on existing literature from judgment and decision-making research such as Tversky’s (1969) work on intransitive preferences.
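Tversky's (1969) demonstration rested on a lexicographic-semiorder style of choice: compare options on the most important attribute first, but treat only differences above a threshold as meaningful, otherwise fall back to the next attribute. A small sketch (attribute values and thresholds invented for the example) shows how such a domain-general rule produces preference cycles:

```python
def prefers(a, b, threshold=0.15):
    """Lexicographic semiorder: decide on attribute 1 only if the gap is
    noticeable, otherwise decide on attribute 2.
    Returns True if option a is preferred to option b."""
    quality_a, other_a = a
    quality_b, other_b = b
    if abs(quality_a - quality_b) > threshold:
        return quality_a > quality_b
    return other_a > other_b  # higher is better on the fallback attribute

# Three hypothetical options, each a (quality, fallback-attractiveness) pair.
A, B, C = (0.50, 3.0), (0.60, 2.0), (0.70, 1.0)

print(prefers(A, B))  # True: qualities look similar, A wins on the fallback
print(prefers(B, C))  # True: qualities look similar, B wins on the fallback
print(prefers(C, A))  # True: quality gap is noticeable, C wins -> a cycle
```

Each pairwise choice is locally sensible, yet together they violate transitivity, which is exactly the kind of finding a domain-general judgment-and-decision-making account can absorb without positing anything morally special.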

The same benefits of this research approach extend to the moral philosophy domain. As we described at the beginning of the paper, empirical research can inform philosophers as to which moral intuitions are likely to be biased. If moral judgments, decisions, and behavior can be captured by well-developed domain-general theories, then our theoretical and empirical resources for gaining knowledge about moral intuitions would be much greater, as compared to the resources provided by moral psychology alone.

The paper can be downloaded here.

Buddhist Ethics

Maria Heim
Elements in Ethics
DOI: 10.1017/9781108588270
First published online: January 2020

Abstract

“Ethics” was not developed as a separate branch of philosophy in Buddhist traditions until the modern period, though Buddhist philosophers have always been concerned with the moral significance of thoughts, emotions, intentions, actions, virtues, and precepts. Their most penetrating forms of moral reflection have been developed within disciplines of practice aimed at achieving freedom and peace. This Element first offers a brief overview of Buddhist thought and modern scholarly approaches to its diverse forms of moral reflection. It then explores two of the most prominent philosophers from the main strands of the Indian Buddhist tradition – Buddhaghosa and Śāntideva – in a comparative fashion.

The info is here.

Sunday, February 2, 2020

Empirical Work in Moral Psychology

Joshua May
Routledge Encyclopedia of Philosophy

How do we form our moral judgments, and how do they influence behavior? What ultimately motivates kind versus malicious action? Moral psychology is the interdisciplinary study of such questions about the mental lives of moral agents, including moral thought, feeling, reasoning, and motivation. While these questions can be studied solely from the armchair or using only empirical tools, researchers in various disciplines, from biology to neuroscience to philosophy, can address them in tandem. Some key topics in this respect revolve around moral cognition and motivation, such as moral responsibility, altruism, the structure of moral motivation, weakness of will, and moral intuitions. Of course there are other important topics as well, including emotions, character, moral development, self-deception, addiction, well-being, and the evolution of moral capacities.

Some of the primary objects of study in moral psychology are the processes driving moral action. For example, we think of ourselves as possessing free will; as being responsible for what we do; as capable of self-control; and as capable of genuine concern for the welfare of others. Such claims can be tested by empirical methods to some extent in at least two ways. First, we can determine what in fact our ordinary thinking is. While many philosophers investigate this through rigorous reflection on concepts, we can also use the empirical methods of the social sciences. Second, we can investigate empirically whether our ordinary thinking is correct or illusory. For example, we can check the empirical adequacy of philosophical theories, assessing directly any claims made about how we think, feel, and behave.

Understanding the psychology of moral individuals is certainly interesting in its own right, but it also often has direct implications for other areas of ethics, such as metaethics and normative ethics. For instance, determining the role of reason versus sentiment in moral judgment and motivation can shed light on whether moral judgments are cognitive, and perhaps whether morality itself is in some sense objective. Similarly, evaluating moral theories, such as deontology and utilitarianism, often relies on intuitive judgments about what one ought to do in various hypothetical cases. Empirical research can again serve as a tool to determine what exactly our intuitions are and which psychological processes generate them, contributing to a rigorous evaluation of the warrant of moral intuitions.

The paper can be downloaded here.

Saturday, February 1, 2020

Bringing Ethics Back To Business

Tamara Pupic
entrepreneur.com
Originally posted 30 Dec 19

In the business world, detecting, preventing, and remedying compliance issues (or a lack thereof) has evolved from academic research, investigative reporting, and businesses applying best-practice initiatives, often clumsily, into a niche sector: regtech, a new space for 'treps to develop innovative technologies that address regulatory challenges.

It is considered the most promising part of the global enterprise governance, risk, and compliance (EGRC) market, which is projected to grow rapidly from US$27.8 billion in 2018 to $64.2 billion by 2025, according to a report by Grand View Research. In the MENA region, transparency and ethical compliance have been at the forefront of shareholder and board of directors' discussions, especially since non-compliance cases at leading firms have started making headlines just about every other week.
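For context, those Grand View Research figures imply a compound annual growth rate of roughly 13% over 2018-2025. A quick back-of-the-envelope check:

```python
# Implied compound annual growth rate (CAGR) from the cited projection.
start, end, years = 27.8, 64.2, 2025 - 2018
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~12.7% per year
```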

(cut)

According to the leadership team, Alethia solves several of the main current challenges in compliance. Firstly, it addresses the lack of anonymity in traditional compliance hotlines and emails. "People are naturally skeptical when it comes to technology and personal data," Roets says. "We instill confidence by requiring no personal information when downloading the app, and we don’t track IP addresses. All interactions are protected with SSL encryption using digitally signed tokens to ensure 100% anonymity for the whistleblower to safeguard against any form of retaliation." Secondly, the app urges organizations to try different reporting channels. "Most still rely on outdated anonymous telephone hotlines, but in a digital world, when we think about workforce demographics, GDPR compliance, cost implications, and the overall decline in telephone usage, hotlines are no longer best practice," Roets says. "Other channels include intranet solutions, cumbersome online forms, or personal interactions with HR or ombudsmen. Unfortunately, these offer little by way of a follow-up feature, call handlers’ subjectivity can impact the quality of reports, and most importantly, they all present a real or perceived threat of compromising the reporter’s identity."
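The article does not describe Alethia's internals, but the general pattern behind "digitally signed tokens" with no personal data can be sketched: the server issues a random report identifier and signs it, and the whistleblower can later present the token for follow-up without ever supplying identifying information. A minimal illustration using Python's standard library, and emphatically not Alethia's actual implementation:

```python
import hmac
import hashlib
import secrets

SERVER_KEY = secrets.token_bytes(32)  # held server-side only

def issue_report_token():
    """Create an anonymous report ID plus a signature the server can verify later.

    No personal data, device ID, or IP address is embedded in the token.
    """
    report_id = secrets.token_urlsafe(16)  # random, unlinkable to the reporter
    signature = hmac.new(SERVER_KEY, report_id.encode(), hashlib.sha256).hexdigest()
    return report_id, signature

def verify_report_token(report_id, signature):
    """Check that a token presented for follow-up was genuinely issued by us."""
    expected = hmac.new(SERVER_KEY, report_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

rid, sig = issue_report_token()
print(verify_report_token(rid, sig))  # True
```

The design point is that the signature proves the token's authenticity for follow-up conversations while the token itself carries nothing that could identify the reporter.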

The info is here.

Friday, January 31, 2020

Most scientists 'can't replicate studies by their peers'

Tom Feilden
BBC.com
Originally posted 22 Feb 17

Here is an excerpt:

The authors should have done it themselves before publication, and all you have to do is read the methods section in the paper and follow the instructions.

Sadly nothing, it seems, could be further from the truth.

After meticulous research involving painstaking attention to detail over several years (the project was launched in 2011), the team was able to confirm only two of the original studies' findings.

Two more proved inconclusive and in the fifth, the team completely failed to replicate the result.

"It's worrying because replication is supposed to be a hallmark of scientific integrity," says Dr Errington.

Concern over the reliability of the results published in scientific literature has been growing for some time.

According to a survey published in the journal Nature last summer, more than 70% of researchers have tried and failed to reproduce another scientist's experiments.

Marcus Munafo is one of them. Now professor of biological psychology at Bristol University, he almost gave up on a career in science when, as a PhD student, he failed to reproduce a textbook study on anxiety.

"I had a crisis of confidence. I thought maybe it's me, maybe I didn't run my study well, maybe I'm not cut out to be a scientist."

The problem, it turned out, was not with Marcus Munafo's science, but with the way the scientific literature had been "tidied up" to present a much clearer, more robust outcome.

The info is here.

Strength of conviction won’t help to persuade when people disagree

Press release
ucl.ac.uk
Originally posted 16 Dec 19

The brain scanning study, published in Nature Neuroscience, reveals a new type of confirmation bias that can make it very difficult to alter people’s opinions.

“We found that when people disagree, their brains fail to encode the quality of the other person’s opinion, giving them less reason to change their mind,” said the study’s senior author, Professor Tali Sharot (UCL Psychology & Language Sciences).

For the study, the researchers asked 42 participants, split into pairs, to estimate house prices. They each wagered on whether the asking price would be more or less than a set amount, depending on how confident they were. Next, each lay in an MRI scanner with the two scanners divided by a glass wall. On their screens they were shown the properties again, reminded of their own judgements, then shown their partner’s assessment and wagers, and finally were asked to submit a final wager.

The researchers found that, when both participants agreed, people would increase their final wagers to larger amounts, particularly if their partner had placed a high wager.

Conversely, when the partners disagreed, the opinion of the disagreeing partner had little impact on people’s wagers, even if the disagreeing partner had placed a high wager.

The researchers found that one brain area, the posterior medial prefrontal cortex (pMFC), was involved in incorporating another person’s beliefs into one’s own. Brain activity differed depending on the strength of the partner’s wager, but only when they were already in agreement. When the partners disagreed, there was no relationship between the partner’s wager and brain activity in the pMFC region.
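A toy numerical sketch of the reported asymmetry may help: under agreement the adjustment scales with the partner's wager, while under disagreement the partner's confidence is largely ignored. The update rule and all numbers below are mine, for illustration only, not the authors' model.

```python
def final_wager(own_wager, partner_wager, agree,
                agree_weight=0.5, disagree_weight=0.05):
    """Illustrative update rule: the partner's confidence moves the wager a lot
    under agreement and barely at all under disagreement (weights invented)."""
    if agree:
        return own_wager + agree_weight * partner_wager
    return max(0.0, own_wager - disagree_weight * partner_wager)

for partner in (10, 60):
    print("agree   :", partner, "->", final_wager(40, partner, agree=True))
    print("disagree:", partner, "->", final_wager(40, partner, agree=False))
# Under agreement the final wager tracks the partner's confidence (45 vs 70);
# under disagreement it barely changes (39.5 vs 37).
```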

The info is here.

Thursday, January 30, 2020

Body Maps of Moral Concerns

Atari, M., Davani, A. M., & Dehghani, M.
(2018, December 4).
https://doi.org/10.31234/osf.io/jkewf

Abstract

The somatosensory reaction to different social circumstances has been proposed to trigger conscious emotional experiences. Here, we present a pre-registered experiment in which we examine the topographical maps associated with violations of different moral concerns. Specifically, participants (N = 596) were randomly assigned to scenarios of moral violations, and then drew their subjective somatosensory experience on two 48,954-pixel silhouettes. We demonstrate that bodily representations of different moral violations are slightly different. Further, we demonstrate that violations of moral concerns are felt in different parts of the body, and arguably result in different somatosensory experiences for liberals and conservatives. We also investigate how individual differences in moral concerns relate to bodily maps of moral violations. Finally, we use natural language processing to predict activation in body parts based on the semantic representation of textual stimuli. The findings shed light on the complex relationships between moral violations and somatosensory experiences.
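The abstract's final step, predicting body-part activation from the text of the moral-violation scenarios, can in principle be framed as a standard text-regression problem. The sketch below (scikit-learn, with fabricated placeholder scenarios and activation scores) shows the general shape of such an analysis; it is an assumption about the setup, not the authors' pipeline.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Placeholder scenario texts and per-region activation scores, fabricated
# for illustration; the study used participants' silhouette drawings.
scenarios = [
    "A manager lies to clients about product safety",
    "A politician takes a bribe from a contractor",
    "A student cheats on a final exam",
    "A neighbor spreads cruel rumors about a family",
]
regions = ["head", "chest", "stomach", "hands"]
activation = np.array([
    [0.2, 0.6, 0.7, 0.1],
    [0.3, 0.5, 0.6, 0.4],
    [0.6, 0.3, 0.4, 0.2],
    [0.5, 0.7, 0.3, 0.1],
])

# Bag-of-words text representation -> multi-output ridge regression.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(scenarios)
model = Ridge(alpha=1.0).fit(X, activation)

new_text = ["An executive hides evidence of fraud"]
pred = model.predict(vectorizer.transform(new_text))[0]
print(dict(zip(regions, pred.round(2))))
```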