Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Friday, September 3, 2021

What is consciousness, and could machines have it?

S. Dehaene, H. Lau, & S. Kouider
Science, 27 Oct 2017, Vol. 358, Issue 6362, pp. 486-492

Abstract

The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.

From Concluding remarks

Our stance is based on a simple hypothesis: What we call “consciousness” results from specific types of information-processing computations, physically realized by the hardware of the brain. It differs from other theories in being resolutely computational; we surmise that mere information-theoretic quantities do not suffice to define consciousness unless one also considers the nature and depth of the information being processed.

We contend that a machine endowed with C1 and C2 would behave as though it were conscious; for instance, it would know that it is seeing something, would express confidence in it, would report it to others, could suffer hallucinations when its monitoring mechanisms break down, and may even experience the same perceptual illusions as humans. Still, such a purely functional definition of consciousness may leave some readers unsatisfied. Are we “over-intellectualizing” consciousness, by assuming that some high-level cognitive functions are necessarily tied to consciousness? Are we leaving aside the experiential component (“what it is like” to be conscious)? Does subjective experience escape a computational definition?

Although those philosophical questions lie beyond the scope of the present paper, we close by noting that empirically, in humans the loss of C1 and C2 computations covaries with a loss of subjective experience. 
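Taken purely as a computational caricature (this is not the authors' architecture, and every function name below is invented for the sketch), the C0/C1/C2 distinction might be rendered in a few lines of Python: several noisy processors run "unconsciously", one result is selected and made globally available for report, and a second-order signal estimates how much that selection can be trusted.

```python
import random

# Toy caricature of the C0/C1/C2 distinction; all names and numbers are invented.

def detect(stimulus, noise=0.3):
    """C0-style processing: a noisy, 'unconscious' estimate of the stimulus."""
    return stimulus + random.gauss(0, noise)

def global_broadcast(estimates):
    """C1-like step: select one content and make it globally available for
    report and further use (here, simply the median estimate)."""
    ranked = sorted(estimates)
    return ranked[len(ranked) // 2]

def self_monitor(estimates, selected):
    """C2-like step: a confidence signal derived from how well the detectors
    agree with the broadcast content (a crude stand-in for self-monitoring)."""
    disagreement = sum(abs(e - selected) for e in estimates) / len(estimates)
    return 1.0 / (1.0 + disagreement)

stimulus = 0.8
estimates = [detect(stimulus) for _ in range(5)]   # C0: parallel unconscious estimates
report = global_broadcast(estimates)               # C1: content available for report
confidence = self_monitor(estimates, report)       # C2: subjective-certainty analogue

print(f"reported value: {report:.2f}, confidence: {confidence:.2f}")
```

A system that stops after the first step would be doing only C0-style processing in this toy; the broadcast and confidence steps are crude analogues of C1 and C2, respectively.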

Thursday, September 2, 2021

Reconciling scientific and commonsense values to improve reasoning

C. Cusimano & T. Lombrozo
Trends in Cognitive Sciences
Available online July 2021

Abstract

Scientific reasoning is characterized by commitments to evidence and objectivity. New research suggests that under some conditions, people are prone to reject these commitments, and instead sanction motivated reasoning and bias. Moreover, people’s tendency to devalue scientific reasoning likely explains the emergence and persistence of many biased beliefs. However, recent work in epistemology has identified ways in which bias might be legitimately incorporated into belief formation. Researchers can leverage these insights to evaluate when commonsense affirmation of bias is justified and when it is unjustified and therefore a good target for intervention.

Highlights
  • People espouse a ‘lay ethics of belief’ that defines standards for how beliefs should be evaluated and formed.
  • People vary in the extent to which they endorse scientific norms of reasoning, such as evidentialism and impartiality, in their own norms of belief. In some cases, people sanction motivated or biased thinking.
  • Variation in endorsement of scientific norms predicts belief accuracy, suggesting that interventions that target norms could lead to more accurate beliefs.
  • Normative theories in epistemology vary in whether, and how, they regard reasoning and belief formation as legitimately impacted by moral or pragmatic considerations.
  • Psychologists can leverage knowledge of people’s lay ethics of belief, and normative arguments about when and whether bias is appropriate, to develop interventions to improve reasoning that are both ethical and effective.

Concluding remarks

It is no secret that humans are biased reasoners. Recent work suggests that these departures from scientific reasoning are not simply the result of unconscious bias, but are also a consequence of endorsing norms for belief that place personal, moral, or social good above truth.  The link between devaluing the ‘scientific ethos’ and holding biased beliefs suggests that, in some cases, interventions on the perceived value of scientific reasoning could lead to better reasoning and to better outcomes. In this spirit, we have offered a strategy for value debiasing.

Wednesday, September 1, 2021

The Dynamics of Inattention in the (Baseball) Field

J. E. Archsmith et al.
IZA Institute of Labor Economics
June 2021

Abstract

Recent theoretical and empirical work characterizes attention as a limited resource that decision-makers strategically allocate. There has been less research on the dynamic interdependence of attention: how paying attention now may affect performance later. In this paper, we exploit high-frequency data on decision-making by Major League Baseball umpires to examine this question. We find that umpires not only apply greater effort to higher-stakes decisions, but also that effort applied to earlier decisions increases errors later. These findings are consistent with the umpire having a depletable ‘budget’ of attention. There is no such dynamic interdependence after breaks during the game (at the end of each inning), suggesting that even short rest periods can replenish attention budgets. We also find that an expectation of higher-stakes future decisions leads to reduced attention to current decisions, consistent with forward-looking behavior by umpires aware of attention scarcity.

Conclusions

Conventional economic models embody agents able to make perfect, optimising decisions. An important strand of recent efforts to increase the behavioral realism of models has been to acknowledge that attention is not costless (the effort required to attend to decisions and execute them well can be costly and cognitively tiring) and to incorporate that cost into models. Models of “strategic inattention”, predicated on rational agents adjusting their behavior to account for attention being limited, costly, or both, are increasingly mainstream (for example, Caplin and Dean, 2015; Sims, 2003; Falkinger, 2011).

While the idea of costly attention is intuitively appealing, rigorous evidence characterizing its implications in real settings remains limited and primarily focuses on static effects in cross-sectional data. This paper adds to and extends this evidence. Studying the quality of decisions made by a panel of professional decision-makers with strong incentives to get these decisions right, we show that MLB umpires systematically vary the effort they apply to individual decisions: applying greater attention to those associated with higher stakes. This is consistent with established theoretical models of strategic inattention. Our data-rich setting, in which the same umpire is called upon to issue a long series of decisions, allows for careful study of the dynamics of inattention and delivers our most novel results.
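The mechanism described here (effort rises with the stakes of the current call, effort spent earlier raises the error rate on later calls, and the between-innings break resets the budget) can be sketched as a toy simulation. All parameters and functional forms below are invented for illustration and are not the paper's estimated model.

```python
import random

# Toy model of a depletable attention budget, loosely inspired by the mechanism
# described above; every number here is made up.

random.seed(1)

BASE_ERROR = 0.08   # error probability on a fully attended call
PENALTY = 0.015     # extra error probability per unit of attention already spent

def simulate_inning(stakes_sequence):
    """Return the number of errors in one inning; attention resets at each break."""
    spent = 0.0                       # attention spent so far this inning
    errors = 0
    for stakes in stakes_sequence:
        effort = stakes               # higher stakes -> more effort on this call
        p_error = max(0.0, BASE_ERROR - 0.02 * effort + PENALTY * spent)
        errors += random.random() < p_error
        spent += effort               # effort now raises the error rate later
    return errors

def mean_errors(stakes_sequence, n=5000):
    return sum(simulate_inning(stakes_sequence) for _ in range(n)) / n

# Same total stakes, different ordering.
print("high-stakes calls early:", mean_errors([2.0] * 5 + [0.5] * 5))
print("high-stakes calls late: ", mean_errors([0.5] * 5 + [2.0] * 5))
```

Under these made-up numbers, innings that front-load the demanding calls accumulate more total errors than innings facing the same calls in reverse order, which is the flavor of dynamic interdependence the paper documents; starting each inning with a fresh budget plays the role of the rest-period replenishment.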

Tuesday, August 31, 2021

What Causes Unethical Behavior? A Meta-Analysis to Set an Agenda for Public Administration Research

Nicola Belle & Paola Cantarelli
(2017)
Public Administration Review,
Vol. 77, Iss. 3, pp. 327–339

Abstract

This article uses meta-analysis to synthesize 137 experiments in 73 articles on the causes of unethical behavior. Results show that exposure to in-group members who misbehave or to others who benefit from unethical actions, greed, egocentrism, self-justification, exposure to incremental dishonesty, loss aversion, challenging performance goals, or time pressure increase unethical behavior. In contrast, monitoring of employees, moral reminders, and individuals’ willingness to maintain a positive self-view decrease unethical conduct. Findings on the effect of self-control depletion on unethical behavior are mixed. Results also present subgroup analyses and several measures of study heterogeneity and likelihood of publication bias. The implications are of interest to both scholars and practitioners. The article concludes by discussing which of the factors analyzed should gain prominence in public administration research and uncovering several unexplored causes of unethical behavior.
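For readers unfamiliar with the machinery behind a synthesis like this, the pooled effects and heterogeneity measures referred to above are typically computed along the following lines. This is a generic DerSimonian-Laird random-effects sketch with invented effect sizes, not the authors' data or code.

```python
import math

# Generic random-effects meta-analysis (DerSimonian-Laird); the per-study
# effects and standard errors below are hypothetical.

effects = [0.42, 0.15, 0.58, 0.31, 0.05]   # standardized effect sizes
ses     = [0.10, 0.12, 0.20, 0.15, 0.11]   # their standard errors

w_fixed = [1 / se**2 for se in ses]        # inverse-variance weights
mean_fixed = sum(w * y for w, y in zip(w_fixed, effects)) / sum(w_fixed)

# Cochran's Q and I^2 quantify between-study heterogeneity.
Q = sum(w * (y - mean_fixed) ** 2 for w, y in zip(w_fixed, effects))
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) if Q > 0 else 0.0

# DerSimonian-Laird estimate of the between-study variance tau^2.
c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (Q - df) / c)

# Random-effects pooled estimate and its standard error.
w_re = [1 / (se**2 + tau2) for se in ses]
mean_re = sum(w * y for w, y in zip(w_re, effects)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"pooled effect = {mean_re:.3f} (SE {se_re:.3f}), "
      f"tau^2 = {tau2:.3f}, I^2 = {I2:.0%}")
```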

From the Discussion

Among the factors that our meta-analyses identified as determinants of unethical behavior, the following may be elevated to prominence for public administration research and practice. First, results from the meta-analyses on social influences suggest that being exposed to corrupted colleagues may enhance the likelihood that one engages in unethical conduct. These findings are particularly relevant because “[c]orruption in the public sector hampers the efficiency of public services, undermines confidence in public institutions and increases the cost of public transactions” (OECD 2015). Moreover, corruption “may distort government’s public resource allocations” (Liu and Mikesell 2014, 346).

Monday, August 30, 2021

Generosity pays: Selfish people have fewer children and earn less money.

Eriksson, K., Vartanova, I., et al.
(2020). Journal of Personality and Social Psychology, 118(3), 532–544.

Abstract

Does selfishness pay in the long term? Previous research has indicated that being prosocial (or otherish) rather than selfish has positive consequences for psychological well-being, physical health, and relationships. Here we instead examine the consequences for individuals’ incomes and number of children, as these are the currencies that matter most in theories that emphasize the power of self-interest, namely economics and evolutionary thinking. Drawing on both cross-sectional (Studies 1 and 2) and panel data (Studies 3 and 4), we find that prosocial individuals tend to have more children and higher income than selfish individuals. An additional survey (Study 5) of lay beliefs about how self-interest impacts income and fertility suggests one reason selfish people may persist in their behavior even though it leads to poorer outcomes: people generally expect selfish individuals to have higher incomes. Our findings have implications for lay decisions about the allocation of scarce resources, as well as for economic and evolutionary theories of human behavior. 

From the General Discussion

Our findings also speak to theories of the evolutionary history of otherishness in humans. It is often assumed that evolution promotes selfishness unless group selection acts as a counter-force (Sober & Wilson, 1999), possibly combined with a punishment mechanism to offset the advantage of being selfish (Henrich & Boyd, 2001). The finding that otherishness is associated with greater fertility within populations indicates that selfishness is not necessarily advantageous in the first place. Our datasets are limited to Europe and the United States, but if the mechanisms we sketched above are correct then we should also expect a similarly positive effect of otherishness on fertility in other parts of the world.

Our results paint a more complex picture for income, compared to fertility. Whereas otherish people tended to show the largest increases in incomes over time, the majority of our studies indicated that the highest absolute levels of income were associated with moderate otherishness. There are several ways in which otherishness may influence income levels and income trajectories. As noted earlier, otherish people tend to have stronger relations and social networks, and social networks are a key source of information about job opportunities (Granovetter, 1995).

Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates?

Lara, F. 
Sci Eng Ethics 27, 42 (2021). 
https://doi.org/10.1007/s11948-021-00318-5

Abstract

Can Artificial Intelligence (AI) be more effective than human instruction for the moral enhancement of people? The author argues that it would only be so if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these proposals, this article proposes a virtual assistant that, through dialogue, neutrality and virtual reality technologies, can teach users to make better moral decisions on their own. The author concludes that, as long as certain precautions are taken in its design, such an assistant could do this better than a human instructor adopting the same educational methodology.

From the Conclusion

The key in moral education is that it be pursued while respecting and promoting personal autonomy. Educators should avoid the mistake of limiting the capacities of individuals to freely and reflectively determine their own values by attempting to enhance their behaviour directly. On the contrary, they must do what they can to ensure that those being educated, at least at an advanced age, actively participate in this process in order to assume the values that will define them and give meaning to their lives. The problem with current proposals for moral enhancement through new technologies is that they treat the subject of their interventions as a “passive recipient”. Moral bioenhancement does so because it aims to change the motivation of the individual by bypassing the reflection and gradual assimilation of values that should accompany any adoption of new identity traits. This constitutes a passivity that would also occur in proposals for moral AI enhancement based on ethical machines that either replace humans in decision-making, or surreptitiously direct them to do the right thing, or simply advise them based on their own supposedly undisputed values.

Sunday, August 29, 2021

A New Era of Designer Babies May Be Based on Overhyped Science

Laura Hercher
Scientific American
Originally published 12 July 21

Here is an excerpt:

Current polygenic risk scores have limited predictive strength and reflect the shortcomings of genetic databases, which are overwhelmingly Eurocentric. Alicia Martin, an instructor at Massachusetts General Hospital and the Broad Institute of the Massachusetts Institute of Technology and Harvard University, says her research examining polygenic risk scores suggests “they don’t transfer well to other populations that have been understudied.” In fact, the National Institutes of Health announced in mid-June that it will be giving out $38 million in grants over five years to find ways to enhance disease prediction in diverse populations using polygenic risk scores. Speaking of Orchid, Martin says, “I think it is premature to try to roll this out.”

In an interview about embryo screening and ethics featured on the company’s Web site, Jonathan Anomaly, a University of Pennsylvania bioethicist, suggested the current biases are a problem to be solved by getting customers and doing the testing. “As I understand it,” he said, “Orchid is actively building statistical models to improve ancestry adaptation and adjustments for genetic risk scores, which will increase accessibility of the product to all individuals.”

Still, better data sets will not allay all concerns about embryo selection. The combined expense of testing and IVF means that unequal access to these technologies will continue to be an issue. In her Mendelspod interview, Siddiqui insisted, “We think that everyone who wants to have a baby should be able to, and we want our technology to be as accessible to everyone who wants it,” adding that the lack of insurance coverage for IVF is a major problem that needs to be addressed in the U.S.

But should insurance companies pay for fertile couples to embryo-shop? This issue is complicated, especially in light of the fact that polygenic risk scores can generate predictions for more than just heart disease and cancer. They can be devised for any trait with a heritable component, and existing models offer predictions for educational attainment, neuroticism and same-sex sexual behavior, all with the same caveats and limitations as Orchid’s current tests for major diseases. To be clear, tests for these behavioral traits are not part of Orchid’s current genetic panel. But when talking about tests the company does offer, Siddiqui suggested that the ultimate decision makers should be the parents-to-be. “I think at the end of the day, you have to respect patient autonomy,” she said.
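At bottom, a polygenic risk score is a weighted sum: each variant's risk-allele count is multiplied by an effect size estimated in a reference GWAS, which is one reason scores trained largely on European-ancestry data transfer poorly to understudied populations. The sketch below is purely illustrative; the variant IDs, weights, and genotype are invented and have no connection to Orchid's panel.

```python
# Minimal illustration of how a polygenic risk score (PRS) is computed:
# a weighted sum of risk-allele counts, with weights (effect sizes) taken
# from a reference GWAS. All values here are invented.

# effect size per risk allele, as estimated in the reference population
weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30, "rs0004": 0.08}

# one individual's genotype: count of risk alleles (0, 1, or 2) at each variant
genotype = {"rs0001": 1, "rs0002": 2, "rs0003": 0, "rs0004": 1}

prs = sum(weights[snp] * genotype.get(snp, 0) for snp in weights)
print(f"polygenic risk score: {prs:.2f}")

# The score only ranks an individual relative to the population in which the
# weights and allele frequencies were estimated; applied elsewhere, both the
# weights and the reference distribution may be miscalibrated.
```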

Saturday, August 28, 2021

Understanding Suicide Risk Among Children and Preteens: A Synthesis Workshop

National Institute of Mental Health
June 15, 2021

NIMH convened a four-part virtual research roundtable series, “Risk, Resilience, & Trajectories in Preteen Suicide.” The roundtables took place between January and April 2021 and culminated in a synthesis meeting in June 2021. The series brought together a diverse group of expert panelists to assess the state of the science and short- and longer-term research priorities related to preteen suicide risk and risk trajectories. Panelists’ expertise was wide-ranging and included youth suicide risk assessment and preventive interventions, developmental psychopathology, child and adolescent mood and anxiety disorders, family and peer relationships, how social and cultural contexts influence youth’s trajectories, biostatistical and computational methods, multilevel modeling, and longitudinal data analysis.


Friday, August 27, 2021

It’s hard to be a moral person. Technology is making it harder.

Sigal Samuel
vox.com
Originally posted 3 Aug 21

Here is an excerpt:

People who point out the dangers of digital tech are often met with a couple of common critiques. The first one goes like this: It’s not the tech companies’ fault. It’s users’ responsibility to manage their own intake. We need to stop being so paternalistic!

This would be a fair critique if there were symmetrical power between users and tech companies. But as the documentary The Social Dilemma illustrates, the companies understand us better than we understand them — or ourselves. They’ve got supercomputers testing precisely which colors, sounds, and other design elements are best at exploiting our psychological weaknesses (many of which we’re not even conscious of) in the name of holding our attention. Compared to their artificial intelligence, we’re all children, Harris says in the documentary. And children need protection.

Another critique suggests: Technology may have caused some problems — but it can also fix them. Why don’t we build tech that enhances moral attention?

“Thus far, much of the intervention in the digital sphere to enhance that has not worked out so well,” says Tenzin Priyadarshi, the director of the Dalai Lama Center for Ethics and Transformative Values at MIT.

It’s not for lack of trying. Priyadarshi and designers affiliated with the center have tried creating an app, 20 Day Stranger, that gives continuous updates on what another person is doing and feeling. You get to know where they are, but never find out who they are. The idea is that this anonymous yet intimate connection might make you more curious or empathetic toward the strangers you pass every day.

They also designed an app called Mitra. Inspired by Buddhist notions of a “virtuous friend” (kalyāṇa-mitra), it prompts you to identify your core values and track how much you acted in line with them each day. The goal is to heighten your self-awareness, transforming your mind into “a better friend and ally.”

I tried out this app, choosing family, kindness, and creativity as the three values I wanted to track. For a few days, it worked great. Being primed with a reminder that I value family gave me the extra nudge I needed to call my grandmother more often. But despite my initial excitement, I soon forgot all about the app.