Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, December 9, 2021

‘Moral molecules’ – a new theory of what goodness is made of

Oliver Scott Curry and others
www.psyche.com
Originally posted 1 NOV 21

Here are two excerpts:

Research is converging on the idea that morality is a collection of rules for promoting cooperation – rules that help us work together, get along, keep the peace and promote the common good. The basic idea is that humans are social animals who have lived together in groups for millions of years. During this time, we have been surrounded by opportunities for cooperation – for mutually beneficial social interaction – and we have evolved and invented a range of ways of unlocking these benefits. These cooperative strategies come in different shapes and sizes: instincts, intuitions, inventions, institutions. Together, they motivate our cooperative behaviour and provide the criteria by which we evaluate the behaviour of others. And it is these cooperative strategies that philosophers and others have called ‘morality’.

This theory of ‘morality as cooperation’ relies on the mathematical analysis of cooperation provided by game theory – the branch of maths that is used to describe situations in which the outcome of one’s decisions depends on the decisions made by others. Game theory distinguishes between competitive ‘zero-sum’ interactions or ‘games’, where one player’s gain is another’s loss, and cooperative ‘nonzero-sum’ games, win-win situations in which both players benefit. What’s more, game theory tells us that there is not just one type of nonzero-sum game; there are many, with many different cooperative strategies for playing them. At least seven different types of cooperation have been identified so far, and each one explains a different type of morality.
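To make the zero-sum/nonzero-sum distinction concrete, here is a minimal sketch (the games and payoff numbers are illustrative, not taken from the article) contrasting a purely competitive game with a win-win coordination game:

```python
# Illustrative two-player payoff matrices (hypothetical numbers). Each cell maps a
# pair of choices to (player 1 payoff, player 2 payoff).

# Zero-sum game (matching pennies): every gain for one player is a loss for the other.
matching_pennies = {
    ("heads", "heads"): (1, -1),
    ("heads", "tails"): (-1, 1),
    ("tails", "heads"): (-1, 1),
    ("tails", "tails"): (1, -1),
}

# Nonzero-sum game (a stag hunt): coordinating on the stag is a win-win outcome.
stag_hunt = {
    ("stag", "stag"): (4, 4),
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),
}

def is_zero_sum(game):
    """A game is zero-sum if the payoffs in every outcome cancel out."""
    return all(p1 + p2 == 0 for p1, p2 in game.values())

print(is_zero_sum(matching_pennies))  # True: purely competitive
print(is_zero_sum(stag_hunt))         # False: cooperation can enlarge the pie
```

In the stag hunt, both players do better by coordinating than by acting alone, the kind of mutually beneficial, win-win structure the excerpt describes.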

(cut)

Hence, seven types of cooperation explain seven types of morality: love, loyalty, reciprocity, heroism, deference, fairness and property rights. And so, according to this theory, it is morally good to: 1) love your family; 2) be loyal to your group; 3) return favours; 4) be heroic; 5) defer to superiors; 6) be fair; and 7) respect property. (And it is morally bad to: 1) neglect your family; 2) betray your group; 3) cheat; 4) be a coward; 5) disrespect authority; 6) be unfair; or 7) steal.) These morals are evolutionarily ancient, genetically distinct, psychologically discrete and cross-culturally universal.

The theory of ‘morality as cooperation’ explains, from first principles, many of the morals on those old lists. Some of the morals correspond to one of the basic types of cooperation (as in the case of courage), while others correspond to component parts of a basic type (as in the case of gratitude, which is a component of reciprocity).

Wednesday, December 8, 2021

Robot Evolution: Ethical Concerns

Eiben, A.E., Ellers, J., et al.
Front. Robot. AI, 03 November 2021

Abstract

Rapid developments in evolutionary computation, robotics, 3D-printing, and material science are enabling advanced systems of robots that can autonomously reproduce and evolve. The emerging technology of robot evolution challenges existing AI ethics because the inherent adaptivity, stochasticity, and complexity of evolutionary systems severely weaken human control and induce new types of hazards. In this paper we address the question of how robot evolution can be responsibly controlled to avoid safety risks. We discuss risks related to robot multiplication, maladaptation, and domination and suggest solutions for meaningful human control. Such concerns may seem far-fetched now; however, we posit that awareness must be created before the technology becomes mature.
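For readers unfamiliar with evolutionary computation, here is a minimal sketch of the kind of mutate-evaluate-select loop that robot evolution builds on (the genome encoding, fitness function, and parameters are hypothetical stand-ins, not the systems discussed in the paper):

```python
import random

# A minimal sketch of an evolutionary loop: evaluate a population, keep the better
# half, and refill it with mutated copies of the survivors.

GENOME_LENGTH = 8        # e.g., parameters of a robot body or controller (hypothetical)
POPULATION_SIZE = 20
GENERATIONS = 50

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LENGTH)]

def mutate(genome, noise=0.1):
    # Random perturbation stands in for variation between "parent" and "offspring".
    return [g + random.gauss(0, noise) for g in genome]

def fitness(genome):
    # Placeholder objective; a real system would score the task performance of a
    # simulated or physically built robot.
    return -sum(g * g for g in genome)

population = [random_genome() for _ in range(POPULATION_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POPULATION_SIZE // 2]                      # selection
    offspring = [mutate(random.choice(survivors)) for _ in survivors]   # reproduction
    population = survivors + offspring

print("best fitness after evolution:", max(fitness(g) for g in population))
```

The designer specifies the loop, not its products; the stochastic variation and selection inside it are what the authors point to as weakening direct human control.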

Conclusion

Robot evolution is not science fiction anymore. The theory and the algorithms are available and robots are already evolving in computer simulations, safely limited to virtual worlds. In the meantime, the technology for real-world implementations is developing rapidly and the first (semi-)autonomously reproducing and evolving robots are likely to arrive within a decade (Hale et al., 2019; Buchanan et al., 2020). Current research in this area is typically curiosity-driven, but it will become increasingly application-oriented as evolving robot systems can be employed in hostile or inaccessible environments, like seafloors, rainforests, ultra-deep mines or other planets, where they develop themselves “on the job” without the need for direct human oversight.

A key insight of this paper is that the practice of second-order engineering, as induced by robot evolution, raises new issues outside the current discourse on AI and robot ethics. Our main message is that awareness must be created before the technology becomes mature, and that researchers and potential users should discuss how robot evolution can be responsibly controlled. Specifically, robot evolution needs careful ethical and methodological guidelines in order to minimize potential harms and maximize the benefits. Even though the evolutionary process is functionally autonomous, without a “steering wheel”, it still entails a necessity to assign responsibilities. This is crucial not only with respect to holding someone responsible if things go wrong, but also to make sure that people take responsibility for certain aspects of the process; without people taking responsibility, the process cannot be effectively controlled. Given the potential benefits and harms and the complicated control issues, there is an urgent need to follow up on our ideas and think further about responsible robot evolution.

Tuesday, December 7, 2021

Memory and decision making interact to shape the value of unchosen options

Biderman, N., Shohamy, D.
Nat Commun 12, 4648 (2021). 
https://doi.org/10.1038/s41467-021-24907-x

Abstract

The goal of deliberation is to separate options so that we can commit to one and leave the other behind. However, deliberation can, paradoxically, also form an association in memory between the chosen and unchosen options. Here, we consider this possibility and examine its consequences for how outcomes affect not only the value of the options we chose, but also, by association, the value of options we did not choose. In five experiments (total n = 612), including a preregistered experiment (n = 235), we found that the value assigned to unchosen options is inversely related to their chosen counterparts. Moreover, this inverse relationship was associated with participants’ memory of the pairs they chose between. Our findings suggest that deciding between options does not end the competition between them. Deliberation binds choice options together in memory such that the learned value of one can affect the inferred value of the other.

From the Discussion

We found that stronger memory for the deliberated options is related to a stronger discrepancy between the value assigned to the chosen and unchosen options. This result suggests that choosing between options leaves a memory trace. By definition, deliberation is meant to tease apart the value of competing options in the service of making the decision; our findings suggest that deliberation and choice also bind pairs of choice options in memory. Consequently, unchosen options do not vanish from memory after a decision is made, but rather they continue to linger through their link to the chosen options.

We show that participants use the association between choice options to infer the value of unchosen options. This finding complements and extends previous studies reporting transfer of value between associated items in the same direction, which allows agents to generalize reward value across associated exemplars. For example, in the sensory preconditioning task, pairs of neutral items are associated by virtue of appearing in temporal proximity. Subsequently, just one item gains feedback—it is either rewarded or not. When probed to choose between items that did not receive feedback, participants tend to select those previously paired with rewarded items. In contrast, our participants tended to avoid the items whose counterpart was previously rewarded. Put in learning terms, when the chosen option proved to be successful, participants’ choices in our task reflected avoidance of, rather than approach to, the unchosen option. One important difference between our task and the sensory preconditioning task is the manner in which the association is formed. In both tasks a pair of items appears in close temporal proximity, yet in our task participants are also asked to decide between these items and the act of deliberation seems to result in an inverse association between the deliberated options.
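A toy contrast (not the authors' computational model) of the two transfer patterns compared above, using hypothetical reward estimates on a 0-1 scale:

```python
# Two ways an item's value can be inferred from its associated partner's outcome.

def preconditioning_transfer(partner_outcome):
    # Sensory preconditioning: value generalizes in the SAME direction, so an item
    # paired with a rewarded item is itself approached.
    return partner_outcome

def post_choice_inference(chosen_outcome):
    # Pattern reported here: the unchosen option inherits an INVERSE value, so a
    # rewarded chosen option makes its unchosen counterpart look worse.
    return 1.0 - chosen_outcome

chosen_outcome = 1.0  # the chosen option turned out to be rewarded
print(preconditioning_transfer(chosen_outcome))  # 1.0 -> approach the paired item
print(post_choice_inference(chosen_outcome))     # 0.0 -> avoid the unchosen option
```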



Monday, December 6, 2021

Overtrusting robots: Setting a research agenda to mitigate overtrust in automation

Aroyo, A.M., et al. (2021).
Journal of Behavioral Robotics,
Vol. 12, no. 1, pp. 423-436. 

Abstract

There is increasing attention given to the concept of trustworthiness for artificial intelligence and robotics. However, trust is highly context-dependent, varies among cultures, and requires reflection on others’ trustworthiness, appraising whether there is enough evidence to conclude that these agents deserve to be trusted. Moreover, little research exists on what happens when too much trust is placed in robots and autonomous systems. Conceptual clarity and a shared framework for approaching overtrust are missing. In this contribution, we offer an overview of pressing topics in the context of overtrust and robots and autonomous systems. Our review mobilizes insights solicited from in-depth conversations at a multidisciplinary workshop on the subject of trust in human–robot interaction (HRI), held at a leading robotics conference in 2020. A broad range of participants brought in their expertise, allowing the formulation of a forward-looking research agenda on overtrust and automation biases in robotics and autonomous systems. Key points include the need for multidisciplinary understandings that are situated in an eco-system perspective, the consideration of adjacent concepts such as deception and anthropomorphization, a connection to ongoing legal discussions through the topic of liability, and a socially embedded understanding of overtrust in education and literacy matters. The article integrates diverse literature and provides a ground for common understanding for overtrust in the context of HRI.

From the Conclusion

In light of the increasing use of automated systems, both embodied and disembodied, overtrust is becoming an ever more important topic. However, our overview shows how the overtrust literature has so far been mostly confined to HRI research and psychological approaches. While philosophers, ethicists, engineers, lawyers, and social scientists more generally have a lot to say about trust and technology, conceptual clarity and a shared framework for approaching overtrust are missing. In this article, our goal was not to provide an overarching framework but rather to encourage further dialogue from an interdisciplinary perspective, integrating diverse literature and providing a ground for common understanding. 

Sunday, December 5, 2021

The psychological foundations of reputation-based cooperation

Manrique, H., et al. (2021, June 2).
https://doi.org/10.1098/rstb.2020.0287

Abstract

Humans care about having a positive reputation, which may prompt them to help in scenarios where the return benefits are not obvious. Various game-theoretical models support the hypothesis that concern for reputation may stabilize cooperation beyond kin, pairs or small groups. However, such models are not explicit about the underlying psychological mechanisms that support reputation-based cooperation. These models therefore cannot account for the apparent rarity of reputation-based cooperation in other species. Here we identify the cognitive mechanisms that may support reputation-based cooperation in the absence of language. We argue that a large working memory enhances the ability to delay gratification, to understand others' mental states (which allows for perspective-taking and attribution of intentions), and to create and follow norms, which are key building blocks for increasingly complex reputation-based cooperation. We review the existing evidence for the appearance of these processes during human ontogeny as well as their presence in non-human apes and other vertebrates. Based on this review, we predict that most non-human species are cognitively constrained to show only simple forms of reputation-based cooperation.

Discussion

We have presented four basic psychological building blocks that we consider important facilitators for complex reputation-based cooperation: working memory, delay of gratification, theory of mind, and social norms. Working memory allows for parallel processing of diverse information, to properly assess others’ actions and update their reputation scores. Delay of gratification is useful for many types of cooperation, but may be particularly relevant for reputation-based cooperation, where the returns come from a future interaction with an observer rather than an immediate reciprocation by one’s current partner. Theory of mind makes it easier to properly assess others’ actions, and reduces the risk that spreading errors will undermine cooperation. Finally, norms support theory of mind by giving individuals a benchmark of what is right or wrong. The more developed each of these building blocks is, the more complex the interaction structure can become. We are aware that by picking these four socio-cognitive mechanisms we leave out other processes that might be involved, e.g. long-term memory, yet we think the ones we picked are more critical and better allow for comparison across species.

Saturday, December 4, 2021

Virtuous Victims

Jordan, Jillian J., and Maryam Kouchaki
Science Advances 7, no. 42 (October 15, 2021).

Abstract

How do people perceive the moral character of victims? We find, across a range of transgressions, that people frequently see victims of wrongdoing as more moral than nonvictims who have behaved identically. Across 17 experiments (total n = 9676), we document this Virtuous Victim effect and explore the mechanisms underlying it. We also find support for the Justice Restoration Hypothesis, which proposes that people see victims as moral because this perception serves to motivate punishment of perpetrators and helping of victims, and people frequently face incentives to enact or encourage these “justice-restorative” actions. Our results validate predictions of this hypothesis and suggest that the Virtuous Victim effect does not merely reflect (i) that victims look good in contrast to perpetrators, (ii) that people are generally inclined to positively evaluate those who have suffered, or (iii) that people hold a genuine belief that victims tend to be people who behave morally.

Discussion

Across 17 experiments (total n = 9676), we have documented and explored the Virtuous Victim effect. We find that victims are frequently seen as more virtuous than nonvictims—not because of their own behavior, but because others have mistreated them. We observe this effect across a range of moral transgressions and find evidence that it is not moderated by the victim’s (white versus black) race or gender. Humans ubiquitously—and perhaps increasingly (1, 2)—encounter narratives about immoral acts and their victims. By demonstrating that these narratives have the power to confer moral status, our results shed new light on the ways that victims are perceived by society.

We have also explored the boundaries of the Virtuous Victim effect and illuminated the mechanisms that underlie it. For example, we find that the Virtuous Victim effect may be especially likely to flow from victim narratives that describe a transgression’s perpetrator and are presented by a third-person narrator (or perhaps, more generally, a narrator who is unlikely to be doubted). We also find that the effect is specific to victims of immorality (i.e., it does not extend to accident victims) and to moral virtue (i.e., it does not extend equally to positive but nonmoral traits). Furthermore, the effect shapes perceptions of moral character but not predictions about moral behavior.

We have also evaluated several potential explanations for the Virtuous Victim effect. Ultimately, our results provide evidence for the Justice Restoration Hypothesis, which proposes that people see victims as virtuous because this perception serves to motivate punishment of perpetrators and helping of victims, and people frequently face incentives to enact or encourage these justice-restorative actions.

Friday, December 3, 2021

A rational reinterpretation of dual-process theories

S. Milli, F. Lieder, & T. L. Griffiths
Cognition
Volume 217, December 2021, 104881

Abstract

Highly influential “dual-process” accounts of human cognition postulate the coexistence of a slow accurate system with a fast error-prone system. But why would there be just two systems rather than, say, one or 93? Here, we argue that a dual-process architecture might reflect a rational tradeoff between the cognitive flexibility afforded by multiple systems and the time and effort required to choose between them. We investigate what the optimal set and number of cognitive systems would be depending on the structure of the environment. We find that the optimal number of systems depends on the variability of the environment and the difficulty of deciding which system should be used in a given situation. Furthermore, we find that there is a plausible range of conditions under which it is optimal to be equipped with a fast system that performs no deliberation (“System 1”) and a slow system that achieves a higher expected accuracy through deliberation (“System 2”). Our findings thereby suggest a rational reinterpretation of dual-process theories.
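A toy illustration of the tradeoff the abstract describes (this is not the paper's model; the environment, accuracy function, and costs are arbitrary choices for the sketch):

```python
import random

# Extra cognitive systems can cover more kinds of situations, but arbitrating among
# them carries a cost that grows with their number.

random.seed(0)

# Situations demand either almost no deliberation (routine) or a lot (novel problems).
situations = [random.uniform(0.0, 0.1) if random.random() < 0.5 else random.uniform(0.9, 1.0)
              for _ in range(1000)]

SELECTION_COST = 0.05  # cost per additional system that must be arbitrated among

def accuracy(system_effort, demanded_effort):
    # A system is more accurate the closer its deliberation is to what the situation demands.
    return 1.0 - abs(system_effort - demanded_effort)

def net_value(system_efforts):
    avg_accuracy = sum(max(accuracy(e, s) for e in system_efforts)
                       for s in situations) / len(situations)
    return avg_accuracy - SELECTION_COST * len(system_efforts)

# Compare architectures with 1 to 5 systems spread evenly over the deliberation range.
for k in range(1, 6):
    efforts = [i / (k - 1) for i in range(k)] if k > 1 else [0.5]
    print(f"{k} system(s): net value = {net_value(efforts):.3f}")
```

In this toy setting the net value peaks at two systems: a no-deliberation system for routine situations plus a deliberative system for demanding ones, echoing the System 1/System 2 result, although the paper's actual analysis is far more general.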

From the General Discussion

While we have formulated the function of selecting between multiple cognitive systems as metareasoning, this does not mean that the mechanisms through which this function is realized have to involve any form of reasoning. Rather, our analysis holds for all selection and arbitration mechanisms, because having more cognitive systems incurs a higher cognitive cost. This also applies to model-free mechanisms that choose decision systems based on learned associations: the more actions there are, the longer it takes for model-free reinforcement learning to converge to a good solution, and the suboptimal choices made during the learning phase can be costly.

The emerging connection between normative modeling and dual-process theories is remarkable because the findings from these approaches are often invoked to support opposite views on human (ir)rationality (Stanovich, 2011). In this debate, some authors (Ariely, 2009; Marcus, 2009) have interpreted the existence of a fast, error-prone cognitive system whose heuristics violate the rules of logic, probability theory, and expected utility theory as a sign of human irrationality. By contrast, our analysis suggests that having a fast but fallible cognitive system in addition to a slow but accurate system might be the best possible solution. This implies that the variability, fallibility, and inconsistency of human judgment that result from people’s switching between System 1 and System 2 should not be interpreted as evidence for human irrationality, because it might reflect the rational use of limited cognitive resources.

Thursday, December 2, 2021

The globalizability of temporal discounting

Ruggeri, K., Panin, A., et al. (2021, October 1). 
psyarxiv.com
https://doi.org/10.31234/osf.io/2enfz

Abstract

Economic inequality is associated with extreme rates of temporal discounting, which is a behavioral pattern where individuals choose smaller, immediate financial gains over larger, delayed gains. Such patterns may feed into rising global inequality, yet it is unclear if they are a function of choice preferences or norms, or rather absence of sufficient resources to meet immediate needs. It is also not clear if these reflect true differences in choice patterns between income groups. We test temporal discounting and five intertemporal choice anomalies using local currencies and value standards in 61 countries. Across a diverse sample of 13,629 participants, we found highly consistent rates of choice anomalies. Individuals with lower incomes were not significantly different, but economic inequality and broader financial circumstances impact population choice patterns.
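For readers unfamiliar with the term, here is a minimal sketch of hyperbolic discounting, one standard way temporal discounting is modeled (V = A / (1 + kD)); the amounts and discount rates are illustrative, not the study's:

```python
def discounted_value(amount, delay_days, k):
    """Hyperbolic discounting: a delayed reward's present value is V = A / (1 + k * D)."""
    return amount / (1 + k * delay_days)

immediate = 50     # smaller, immediate reward
delayed = 100      # larger reward available after a delay
delay_days = 30

for k in (0.01, 0.05, 0.2):   # steeper k = stronger preference for immediate rewards
    v_delayed = discounted_value(delayed, delay_days, k)
    choice = "take $50 now" if immediate > v_delayed else "wait for $100"
    print(f"k={k}: delayed reward is worth {v_delayed:.1f} today -> {choice}")
```

The steeper the discount rate k, the more a delayed reward shrinks in present value, which is the "smaller-immediate over larger-delayed" pattern the abstract describes.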


Bottom line: This research refutes the perspective that low-income individuals are poor decision-makers.

Wednesday, December 1, 2021

‘Yeah, we’re spooked’: AI starting to have big real-world impact

Nicola K. Davis
The Guardian
Originally posted 29 OCT 21

Here is an excerpt:

One concern is that a machine would not need to be more intelligent than humans in all things to pose a serious risk. “It’s something that’s unfolding now,” [Stuart Russell] said. “If you look at social media and the algorithms that choose what people read and watch, they have a huge amount of control over our cognitive input.”

The upshot, he said, is that the algorithms manipulate the user, brainwashing them so that their behaviour becomes more predictable when it comes to what they choose to engage with, boosting click-based revenue.

Have AI researchers become spooked by their own success? “Yeah, I think we are increasingly spooked,” Russell said.

“It reminds me a little bit of what happened in physics where the physicists knew that atomic energy existed, they could measure the masses of different atoms, and they could figure out how much energy could be released if you could do the conversion between different types of atoms,” he said, noting that the experts always stressed the idea was theoretical. “And then it happened and they weren’t ready for it.”

The use of AI in military applications – such as small anti-personnel weapons – is of particular concern, he said. “Those are the ones that are very easily scalable, meaning you could put a million of them in a single truck and you could open the back and off they go and wipe out a whole city,” said Russell.

Russell believes the future for AI lies in developing machines that know the true objective is uncertain, as are our preferences, meaning they must check in with humans – rather like a butler – on any decision. But the idea is complex, not least because different people have different – and sometimes conflicting – preferences, and those preferences are not fixed.

Russell called for measures including a code of conduct for researchers, legislation and treaties to ensure the safety of AI systems in use, and training of researchers to ensure AI is not susceptible to problems such as racial bias. He said EU legislation that would ban impersonation of humans by machines should be adopted around the world.