Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Self.

Friday, December 1, 2017

The Essence of the Individual: The Pervasive Belief in the True Self Is an Instance of Psychological Essentialism

Andrew G. Christy, Rebecca J. Schlegel, and Andrei Cimpian
Preprint

Abstract

Eight studies (N = 2,974) were conducted to test the hypothesis that the widespread folk belief in the true self is an instance of psychological essentialism. Results supported this hypothesis. Specifically, participants’ reasoning about the true self displayed the telltale features of essentialist reasoning (immutability, discreteness, consistency, informativeness, inherence, and biological basis; Studies 1–4); participants’ endorsement of true-self beliefs correlated with individual differences in other essentialist beliefs (Study 5); and experimental manipulations of essentialist thought in domains other than the self were found to “spill over” and affect the extent to which participants endorsed true-self beliefs (Studies 6–8). These findings advance theory on the origins and functions of true-self beliefs, revealing these beliefs to be a specific instance of a broader tendency to explain phenomena in the world in terms of underlying essences.

The preprint is here.

Monday, November 13, 2017

Will life be worth living in a world without work? Technological Unemployment and the Meaning of Life

John Danaher
forthcoming in Science and Engineering Ethics

Abstract

Suppose we are about to enter an era of increasing technological unemployment. What implications does this have for society? Two distinct ethical/social issues would seem to arise. The first is one of distributive justice: how will the (presumed) efficiency gains from automated labour be distributed through society? The second is one of personal fulfillment and meaning: if people no longer have to work, what will they do with their lives? In this article, I set aside the first issue and focus on the second. In doing so, I make three arguments. First, I argue that there are good reasons to embrace non-work and that these reasons become more compelling in an era of technological unemployment. Second, I argue that the technological advances that make widespread technological unemployment possible could still threaten or undermine human flourishing and meaning, especially if (as is to be expected) they do not remain confined to the economic sphere. And third, I argue that this threat could be contained if we adopt an integrative approach to our relationship with technology. In advancing these arguments, I draw on three distinct literatures: (i) the literature on technological unemployment and workplace automation; (ii) the antiwork critique — which I argue gives reasons to embrace technological unemployment; and (iii) the philosophical debate about the conditions for meaning in life — which I argue gives reasons for concern.

The article is here.
 

Thursday, November 2, 2017

Christian self-enhancement

Jochen E. Gebauer, Constantine Sedikides, and Alexandra Schrade
Journal of Personality and Social Psychology, Vol 113(5), Nov 2017, 786-809

Abstract

People overestimate themselves in domains that are central to their self-concept. Critically, the psychological status of this “self-centrality principle” remains unclear. One view regards the principle as an inextricable part of human nature and, thus, as universal and resistant to normative pressure. A contrasting view regards the principle as liable to pressure (and subsequent modification) from self-effacement norms, thus questioning its universality. Advocates of the latter view point to Christianity’s robust self-effacement norms, which they consider particularly effective in curbing self-enhancement, and ascribe Christianity an ego-quieting function. Three sets of studies examined the self-centrality principle among Christians. Studies 1A and 1B (N = 2,118) operationalized self-enhancement as better-than-average perceptions on the domains of commandments of faith (self-centrality: Christians ≫ nonbelievers) and commandments of communion (self-centrality: Christians > nonbelievers). Studies 2A–2H (N = 1,779) operationalized self-enhancement as knowledge overclaiming on the domains of Christianity (self-centrality: Christians ≫ nonbelievers), communion (self-centrality: Christians > nonbelievers), and agency (self-centrality: Christians ≈ nonbelievers). Studies 3A–3J (N = 1,956) operationalized self-enhancement as grandiose narcissism on the domains of communion (self-centrality: Christians > nonbelievers) and agency (self-centrality: Christians ≈ nonbelievers). The results converged across studies, yielding consistent evidence for Christian self-enhancement. Relative to nonbelievers, Christians self-enhanced strongly in domains central to the Christian self-concept. The results also generalized across countries with differing levels of religiosity. Christianity does not quiet the ego. The self-centrality principle is resistant to normative pressure, universal, and rooted in human nature.

The research can be found here.

Tuesday, October 31, 2017

Who Is Rachael? Blade Runner and Personal Identity

Helen Beebee
iai news
Originally posted October 5, 2017

It’s no coincidence that a lot of philosophers are big fans of science fiction. Philosophers like to think about far-fetched scenarios or ‘thought experiments’, explore how they play out, and think about what light they can shed on how we should think about our own situation. What if you could travel back in time? Would you be able to kill your own grandfather, thereby preventing him from meeting your grandmother, meaning that you would never have been born in the first place? What if we could somehow predict with certainty what people would do? Would that mean that nobody had free will? What if I was really just a brain wired up to a sophisticated computer running virtual reality software? Should it matter to me that the world around me – including other people – is real rather than a VR simulation? And how do I know that it’s not?

Questions such as these routinely get posed in sci-fi books and films, and in a particularly vivid and thought-provoking way. In immersing yourself in an alternative version of reality, and by identifying or sympathising with the characters and seeing things from their point of view, you can often get a much better handle on the question. Philip K. Dick – whose Do Androids Dream of Electric Sheep?, first published in 1968, is the story on which the 1982 film Blade Runner is based – was a master at exploring these kinds of philosophical questions. Often the question itself is left unstated; his characters are generally not much prone to philosophical rumination on their situation. But it’s there in the background nonetheless, waiting for you to find it and to think about what the answer might be.

Some of the questions raised by the original Dick story don’t get any, or much, attention in Blade Runner. Mercerism – the peculiar quasi-religion of the book, which is based on empathy and which turns out to be founded on a lie – doesn’t get a mention in the film. And while, in the film as in the book, the capacity for empathy is what (supposedly) distinguishes humans from androids (or, in the film, replicants; apparently by 1982 ‘android’ was considered too dated a word), in the film we don’t get the suggestion that the purported significance of empathy, through its role in Mercerism, is really just a ploy: a way of making everyone think that androids lack, as it were, the essence of personhood, and hence can be enslaved and bumped off with impunity.

The article is here.

Tuesday, September 12, 2017

Personal values in human life

Lilach Sagiv, Sonia Roccas, Jan Cieciuch & Shalom H. Schwartz
Nature Human Behaviour (2017)
doi:10.1038/s41562-017-0185-3

Abstract

The construct of values is central to many fields in the social sciences and humanities. The last two decades have seen a growing body of psychological research that investigates the content, structure and consequences of personal values in many cultures. Taking a cross-cultural perspective we review, organize and integrate research on personal values, and point to some of the main findings that this research has yielded. Personal values are subjective in nature, and reflect what people think and state about themselves. Consequently, both researchers and laymen sometimes question the usefulness of personal values in influencing action. Yet, self-reported values predict a large variety of attitudes, preferences and overt behaviours. Individuals act in ways that allow them to express their important values and attain the goals underlying them. Thus, understanding personal values means understanding human behaviour.

Friday, August 11, 2017

The real problem (of consciousness)

Anil K Seth
Aeon.com
Originally posted November 2, 2016

Here is an excerpt:

The classical view of perception is that the brain processes sensory information in a bottom-up or ‘outside-in’ direction: sensory signals enter through receptors (for example, the retina) and then progress deeper into the brain, with each stage recruiting increasingly sophisticated and abstract processing. In this view, the perceptual ‘heavy-lifting’ is done by these bottom-up connections. The Helmholtzian view inverts this framework, proposing that signals flowing into the brain from the outside world convey only prediction errors – the differences between what the brain expects and what it receives. Perceptual content is carried by perceptual predictions flowing in the opposite (top-down) direction, from deep inside the brain out towards the sensory surfaces. Perception involves the minimisation of prediction error simultaneously across many levels of processing within the brain’s sensory systems, by continuously updating the brain’s predictions. In this view, which is often called ‘predictive coding’ or ‘predictive processing’, perception is a controlled hallucination, in which the brain’s hypotheses are continually reined in by sensory signals arriving from the world and the body. ‘A fantasy that coincides with reality,’ as the psychologist Chris Frith eloquently put it in Making Up the Mind (2007).
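The predictive-processing loop described above (top-down predictions continually corrected by bottom-up prediction errors) can be sketched as a toy simulation. The scalar signal, learning rate, and function name below are inventions for illustration only, not anything from Seth's article:

```python
def perceive(sensory_input, prediction=0.0, learning_rate=0.1, steps=50):
    """Toy predictive-coding loop: the brain's prediction is revised
    by a fraction of the prediction error at every step."""
    for _ in range(steps):
        error = sensory_input - prediction    # bottom-up: only the error flows in
        prediction += learning_rate * error   # top-down: the prediction is updated
    return prediction

# The prediction converges on the sensory signal as the error is minimised.
percept = perceive(sensory_input=1.0)
```

On this picture the final "percept" is the settled prediction rather than the raw input, which is one way of reading Seth's description of perception as a controlled hallucination, reined in by signals from the world.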

Armed with this theory of perception, we can return to consciousness. Now, instead of asking which brain regions correlate with conscious (versus unconscious) perception, we can ask: which aspects of predictive perception go along with consciousness? A number of experiments are now indicating that consciousness depends more on perceptual predictions, than on prediction errors. In 2001, Alvaro Pascual-Leone and Vincent Walsh at Harvard Medical School asked people to report the perceived direction of movement of clouds of drifting dots (so-called ‘random dot kinematograms’). They used TMS to specifically interrupt top-down signalling across the visual cortex, and they found that this abolished conscious perception of the motion, even though bottom-up signals were left intact.

The article is here.

Friday, July 28, 2017

I attend, therefore I am

Carolyn Dicey Jennings
Aeon.com
Originally published July 10, 2017

Here is an excerpt:

Following such considerations, the philosopher Daniel Dennett proposed that the self is simply a ‘centre of narrative gravity’ – just as the centre of gravity in a physical object is not a part of that object, but a useful concept we use to understand the relationship between that object and its environment, the centre of narrative gravity in us is not a part of our bodies, a soul inside of us, but a useful concept we use to make sense of the relationship between our bodies, complete with their own goals and intentions, and our environment. So, you, you, are a construct, albeit a useful one. Or so goes Dennett’s thinking on the self.

And it isn’t just Dennett. The idea that there is a substantive self is passé. When cognitive scientists aim to provide an empirical account of the self, it is simply an account of our sense of self – why it is that we think we have a self. What we don’t find is an account of a self with independent powers, responsible for directing attention and resolving conflicts of will.

There are many reasons for this. One is that many scientists think that the evidence counts in favour of our experience in general being epiphenomenal – something that does not influence our brain, but is influenced by it. In this view, when you experience making a tough decision, for instance, that decision was already made by your brain, and your experience is mere shadow of that decision. So for the very situations in which we might think the self is most active – in resolving difficult decisions – everything is in fact already achieved by the brain.

The article is here.

Tuesday, April 18, 2017

‘Your animal life is over. Machine life has begun.’

Mark O'Connell
The Guardian
Originally published March 25, 2017

Here is an excerpt:

The relevant science for whole brain emulation is, as you’d expect, hideously complicated, and its interpretation deeply ambiguous, but if I can risk a gross oversimplification here, I will say that it is possible to conceive of the idea as something like this: first, you scan the pertinent information in a person’s brain – the neurons, the endlessly ramifying connections between them, the information-processing activity of which consciousness is seen as a byproduct – through whatever technology, or combination of technologies, becomes feasible first (nanobots, electron microscopy, etc). That scan then becomes a blueprint for the reconstruction of the subject brain’s neural networks, which is then converted into a computational model. Finally, you emulate all of this on a third-party non-flesh-based substrate: some kind of supercomputer or a humanoid machine designed to reproduce and extend the experience of embodiment – something, perhaps, like Natasha Vita-More’s Primo Posthuman.

The whole point of substrate independence, as Koene pointed out to me whenever I asked him what it would be like to exist outside of a human body – and I asked him many times, in various ways – was that it would be like no one thing, because there would be no one substrate, no one medium of being. This was the concept transhumanists referred to as “morphological freedom” – the liberty to take any bodily form technology permits.

“You can be anything you like,” as an article about uploading in Extropy magazine put it in the mid-90s. “You can be big or small; you can be lighter than air and fly; you can teleport and walk through walls. You can be a lion or an antelope, a frog or a fly, a tree, a pool, the coat of paint on a ceiling.”

The article is here.

Thursday, April 6, 2017

Would You Deliver an Electric Shock in 2015?

Dariusz Doliński, Tomasz Grzyb, and others
Social Psychological and Personality Science
First Published January 1, 2017

Abstract

In spite of the over 50 years which have passed since the original experiments conducted by Stanley Milgram on obedience, these experiments are still considered a turning point in our thinking about the role of the situation in human behavior. While ethical considerations prevent a full replication of the experiments from being prepared, a certain picture of the level of obedience of participants can be drawn using the procedure proposed by Burger. In our experiment, we have expanded it by controlling for the sex of participants and of the learner. The results achieved show a level of participants’ obedience toward instructions similarly high to that of the original Milgram studies. Results regarding the influence of the sex of participants and of the “learner,” as well as of personality characteristics, do not allow us to unequivocally accept or reject the hypotheses offered.

The article is here.

“After 50 years, it appears nothing has changed,” said social psychologist Tomasz Grzyb, an author of the new study, which appeared this week in the journal Social Psychological and Personality Science.

A Los Angeles Times article summarizes the study here.

Friday, March 24, 2017

A cleansing fire: Moral outrage alleviates guilt and buffers threats to one’s moral identity

Rothschild, Z.K. & Keefer, L.A.
Motiv Emot (2017). doi:10.1007/s11031-017-9601-2

Abstract

Why do people express moral outrage? While this sentiment often stems from a perceived violation of some moral principle, we test the counter-intuitive possibility that moral outrage at third-party transgressions is sometimes a means of reducing guilt over one’s own moral failings and restoring a moral identity. We tested this guilt-driven account of outrage in five studies examining outrage at corporate labor exploitation and environmental destruction. Study 1 showed that personal guilt uniquely predicted moral outrage at corporate harm-doing and support for retributive punishment. Ingroup (vs. outgroup) wrongdoing elicited outrage at corporations through increased guilt, while the opportunity to express outrage reduced guilt (Study 2) and restored perceived personal morality (Study 3). Study 4 tested whether effects were due merely to downward social comparison and Study 5 showed that guilt-driven outrage was attenuated by an affirmation of moral identity in an unrelated context.

The article is here.

Tuesday, January 3, 2017

Traces of Times Lost

Erika Hayasaki
The Atlantic
Originally posted November 29, 2016

Here is an excerpt:

According to a 2010 study in Developmental Psychology, 20 percent of children interviewed under age 10 remembered events that occurred (and were verified by parents) before they even turned a year old—in some cases even as early as one month old. These are provocative findings. Yet Katherine Nelson, a developmental psychologist at City University of New York who studied child memory for decades, tells me: “It is still an open question as to whether and when very young children have true episodic memories.” Even if they appear to, she explains, these memories are fragile and susceptible to suggestion.

(cut)

Last year, researchers from Yale University and the University of Arizona published a study in Psychological Science proclaiming that morality is more central to identity than memory. The authors studied patients with frontotemporal dementia (in which damage to the brain’s prefrontal cortex can lead to dishonesty and socially unacceptable behavior), amyotrophic lateral sclerosis (also known as Lou Gehrig’s disease, which affects muscle control), and Alzheimer’s disease (which robs a person of memory). The research found that as long as moral capacity is not impaired, the self persists, even when memory is compromised. “These results speak to significant and longstanding questions about the nature of identity, questions that have occupied social scientists, neurologists, philosophers, and novelists alike,” the authors write.

The article is here.

Thursday, December 29, 2016

The True Self: A psychological concept distinct from the self.

Strohminger, N., Newman, G., & Knobe, J. (in press).
Perspectives on Psychological Science.

A long tradition of psychological research has explored the distinction between characteristics that are part of the self and those that lie outside of it. Recently, a surge of research has begun examining a further distinction. Even among characteristics that are internal to the self, people pick out a subset as belonging to the true self. These factors are judged as making people who they really are, deep down. In this paper, we introduce the concept of the true self and identify features that distinguish people’s understanding of the true self from their understanding of the self more generally. In particular, we consider recent findings that the true self is perceived as positive and moral, and that this tendency is actor-observer invariant and cross-culturally stable. We then explore possible explanations for these findings and discuss their implications for a variety of issues in psychology.

The paper is here.

Tuesday, November 8, 2016

The Illusion of Moral Superiority

Ben M. Tappin and Ryan T. McKay
Social Psychological and Personality Science
2016, 1-9

Abstract

Most people strongly believe they are just, virtuous, and moral; yet regard the average person as distinctly less so. This invites accusations of irrationality in moral judgment and perception—but direct evidence of irrationality is absent. Here, we quantify this irrationality and compare it against the irrationality in other domains of positive self-evaluation. Participants (N = 270) judged themselves and the average person on traits reflecting the core dimensions of social perception: morality, agency, and sociability. Adapting new methods, we reveal that virtually all individuals irrationally inflated their moral qualities, and the absolute and relative magnitude of this irrationality was greater than that in the other domains of positive self-evaluation. Inconsistent with prevailing theories of overly positive self-belief, irrational moral superiority was not associated with self-esteem. Taken together, these findings suggest that moral superiority is a uniquely strong and prevalent form of “positive illusion,” but the underlying function remains unknown.

The article is here.

Thursday, August 4, 2016

Undermining Belief in Free Will Diminishes True Self-Knowledge

Elizabeth Seto and Joshua A. Hicks
Disassociating the Agent From the Self
Social Psychological and Personality Science, first published June 17, 2016. doi:10.1177/1948550616653810

Undermining the belief in free will influences thoughts and behavior, yet little research has explored its implications for the self and identity. The current studies examined whether lowering free will beliefs reduces perceived true self-knowledge. First, a new free will manipulation was validated. Next, in Study 1, participants were randomly assigned to high belief or low belief in free will conditions and completed measures of true self-knowledge. In Study 2, participants completed the same free will manipulation and a moral decision-making task. We then assessed participants’ perceived sense of authenticity during the task. Results illustrated that attenuating free will beliefs led to less self-knowledge, such that participants reported feeling more alienated from their true selves and experienced lowered perceptions of authenticity while making moral decisions. The interplay between free will and the true self is discussed.

Tuesday, February 9, 2016

Ethical dissonance, justifications, and moral behavior

Rachel Barkan, Shahar Ayal, and Dan Ariely
Current Opinion in Psychology
Volume 6, December 2015, Pages 157–161

Abstract

Ethical dissonance is triggered by the inconsistency between the aspiration to uphold a moral self-image and the temptation to benefit from unethical behavior. In terms of a temporal distinction, anticipated dissonance occurs before people commit a moral violation. In contrast, experienced dissonance occurs after people realize they have violated their moral code. We review the psychological mechanisms and justifications people use to reduce ethical dissonance in order to benefit from wrongdoing and still feel moral. We then propose harnessing anticipated dissonance to help people resist temptation and utilizing experienced dissonance to prompt moral compensation and atonement. We argue that rather than viewing ethical dissonance as a threat to self-image, we should help people see it as the gate-keeper of their morality.

Highlights

• Ethical dissonance represents the tension between moral-self and unethical behavior.
• Justifications reduce ethical dissonance, allowing people to do wrong and still feel moral.
• Ethical dissonance can be anticipated before, or experienced after, the violation.
• Effective moral interventions can harness ethical dissonance as a moral gate-keeper.

The article is here.

Tuesday, January 5, 2016

Neuroethics

Richard Marshall interviews Kathinka Evers
3:AM Magazine
Originally published December 20, 2015

Here is an excerpt:

So far, researchers in neuroethics have focused mainly on the ethics of neuroscience, or applied neuroethics, such as ethical issues involved in neuroimaging techniques, cognitive enhancement, or neuropharmacology. Another important, though as yet less prevalent, scientific approach that I refer to as fundamental neuroethics questions how knowledge of the brain’s functional architecture and its evolution can deepen our understanding of personal identity, consciousness and intentionality, including the development of moral thought and judgment. Fundamental neuroethics should provide adequate theoretical foundations required in order properly to address problems of applications.

The initial question for fundamental neuroethics to answer is: how can natural science deepen our understanding of moral thought? Indeed, is the former at all relevant for the latter? One can see this as a sub-question of the question whether human consciousness can be understood in biological terms, moral thought being a subset of thought in general. That is certainly not a new query, but a version of the classical mind-body problem that has been discussed for millennia and in quite modern terms from the French Enlightenment and onwards. What is comparatively new is the realisation of the extent to which ancient philosophical problems emerge in the rapidly advancing neurosciences, such as whether or not the human species as such possesses a free will, what it means to have personal responsibility, to be a self, the relations between emotions and cognition, or between emotions and memory.

The interview is here.

Sunday, January 3, 2016

Is It Immoral for Me to Dictate an Accelerated Death for My Future Demented Self?

By Norman L. Cantor
Harvard Law Blog
Originally posted December 2, 2015

I am obsessed with avoiding severe dementia. As a person who has always valued intellectual function, the prospect of lingering in a dysfunctional cognitive state is distasteful — an intolerable indignity. For me, such mental debilitation soils the remembrances to be left with my survivors and undermines the life narrative as a vibrant, thinking, and articulate figure that I assiduously cultivated. (Burdening others is also a distasteful prospect, but it is the vision of intolerable indignity that drives my planning of how to respond to a diagnosis of progressive dementia such as Alzheimer’s).

(cut)

I suggest that while a demented persona no longer recalls the values underlying the AD and cannot now be offended by breaches of value-based instructions, those considered instructions are still worthy of respect. As noted, the well-established mechanism – an AD – is intended to enable a person to govern the medical handling of their future demented self. And the values and principles underlying advance instructions can certainly include factors beyond the patient’s contemporaneous well-being.

The entire blog post is here.

Wednesday, December 23, 2015

Men at Work

By Allison J. Pugh
Aeon
Originally posted December 4, 2015

Here are two excerpts:

One option is to get angry. When I interviewed laid-off men for my recent book on job insecurity, their anger, or more often a wry bitterness, was impossible to forget. By and large, like Gary the laid-off tradesman, they were not angry at their employers. At home, however, they sounded a different note. ‘I have a very set opinion of relationships and how females handle them,’ Gary told me, rather flatly. ‘It’s what I’ve seen consistently throughout my life.’ On his third serious relationship, Gary talked about the ‘hurt that’s been caused to me by a lack of commitment on the part of other people’, and he complained that ‘marriage can be tossed out like a Pepsi can’. In the winds of uncertainty, Gary’s anger at women keeps him grounded.

(cut)

Nonetheless, most working‑class men such as Gary are trapped by a changing economy and an intransigent masculinity. Faced with changes that reduce the options for less-educated men, they have essentially three choices, none of them very likely. They can pursue more education than their family background or their school success has prepared them for. They can find a low-wage job in a high-growth sector, positions that are often considered women’s work, such as a certified nurse practitioner or retail cashier. Or they can take on more of the domestic labour at home, enabling their partners to take on more work to provide for the household. These are ‘choices’ that either force them to be class pioneers or gender insurgents in their quest for a sustainable heroism; while both are laudable, we can hardly expect them of most men, and yet this is precisely the dilemma that faces men today.

The article is here.

Note from me: This article not about the standard issues in ethics. However, it does bring up the issue of competence. Do we, as psychologists, understand the culture of males in a changing economic system? And, is the changing economic picture a factor in the increase in white, male suicides?

Wednesday, November 11, 2015

Putting a price on empathy: against incentivising moral enhancement

By Sarah Carter
J Med Ethics 
doi:10.1136/medethics-2015-102804

Abstract

Concerns that people would be disinclined to voluntarily undergo moral enhancement have led to suggestions that an incentivised programme should be introduced to encourage participation. This paper argues that, while such measures do not necessarily result in coercion or undue inducement (issues with which one may typically associate the use of incentives in general), the use of incentives for this purpose may present a taboo trade-off. This is due to empirical research suggesting that those characteristics likely to be affected by moral enhancement are often perceived as fundamental to the self; therefore, any attempt to put a price on such traits would likely be deemed morally unacceptable by those who hold this view. A better approach to address the possible lack of participation may be to instead invest in alternative marketing strategies and remove incentives altogether.

Wednesday, October 28, 2015

The predictive brain and the “free will” illusion

Dirk De Ridder, Jan Verplaetse and Sven Vanneste
Front. Psychol., 30 April 2013
http://dx.doi.org/10.3389/fpsyg.2013.00131

Here is an excerpt:

From an evolutionary point of view, our experience of “free will” can best be approached by the development of flexible behavioral decision making (Brembs, 2011). Predators can very easily take advantage of deterministic flight reflexes by predicting future prey behavior (Catania, 2009). The opposite, i.e., random behavior, is unpredictable but highly inefficient. Thus learning mechanisms evolved to permit flexible behavior as a modification of reflexive behavioral strategies (Brembs, 2011). In order to do so, not one, but multiple representations and action patterns should be generated by the brain, as has already been proposed by von Helmholtz. He found the eye to be optically too poor for vision to be possible, and suggested vision ultimately depended on computational inference, i.e., predictions, based on assumptions and conclusions from incomplete data, relying on previous experiences. The fact that multiple predictions are generated could, for example, explain the Rubin vase illusion, the Necker cube and the many other stimuli studied in perceptual rivalry, even in monocular rivalry. Which percept or action plan is selected is determined by which prediction is best adapted to the environment that is actively explored (Figure 1A). In this sense, predictive selection of the fittest action plan is analogous to the concept of Darwinian selection of the fittest in natural and sexual selection in evolutionary biology, as well as to the Mendelian selection of the fittest allele in genetics and to the selection of the fittest quantum state in physics (Zurek, 2009). Bayesian statistics can be used to select the model with the highest updated likelihood based on new environmental information (Campbell, 2011). What all these models have in common is that they describe mechanisms of adaptation to an ever-changing environment (Campbell, 2011).
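The Bayesian selection step the authors mention (choosing the model with the highest updated likelihood) can be illustrated with a minimal sketch. The candidate "readings", priors, and likelihood values below are invented for the example and are not taken from the paper:

```python
def bayes_update(priors, likelihoods):
    """Posterior over competing predictions: posterior(m) is proportional
    to prior(m) * likelihood(evidence given m), renormalised to sum to 1."""
    unnormalised = {m: priors[m] * likelihoods[m] for m in priors}
    total = sum(unnormalised.values())
    return {m: p / total for m, p in unnormalised.items()}

# Two rival readings of an ambiguous figure (e.g. the Necker cube);
# new sensory evidence fits reading_a better, so it wins the selection.
posterior = bayes_update(
    priors={"reading_a": 0.5, "reading_b": 0.5},
    likelihoods={"reading_a": 0.8, "reading_b": 0.2},
)
selected = max(posterior, key=posterior.get)
```

In the article's terms, `selected` plays the role of the fittest prediction: the one best adapted to the incoming evidence, and hence the one that captures perception or guides action.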

The entire article is here.