Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Information Processing.

Thursday, April 20, 2023

Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases

Oeberst, A., & Imhoff, R. (2023).
Perspectives on Psychological Science, 0(0).
https://doi.org/10.1177/17456916221148147

Abstract

One of the essential insights from psychological research is that people’s information processing is often biased. By now, a number of different biases have been identified and empirically demonstrated. Unfortunately, however, these biases have often been examined in separate lines of research, thereby precluding the recognition of shared principles. Here we argue that several—so far mostly unrelated—biases (e.g., bias blind spot, hostile media bias, egocentric/ethnocentric bias, outcome bias) can be traced back to the combination of a fundamental prior belief and humans’ tendency toward belief-consistent information processing. What varies between different biases is essentially the specific belief that guides information processing. More importantly, we propose that different biases even share the same underlying belief and differ only in the specific outcome of information processing that is assessed (i.e., the dependent variable), thus tapping into different manifestations of the same latent information processing. In other words, we propose for discussion a model that suffices to explain several different biases. We thereby suggest a more parsimonious approach compared with current theoretical explanations of these biases. We also generate novel hypotheses that follow directly from the integrative nature of our perspective.
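
To make the framework concrete, here is a toy sketch (our illustration, not a model from the paper) of its two ingredients: a fundamental prior belief, and a consistency weighting that overweights evidence agreeing with that prior. Which "bias" then gets measured depends only on the dependent variable one chooses. All names and parameter values below are hypothetical.

```python
# Toy sketch (not the authors' model) of the framework's two ingredients:
# a fundamental prior belief plus belief-consistent weighting of evidence.
# All names and parameter values are hypothetical.

def update_belief(prior: float, evidence: float, consistency_bias: float) -> float:
    """Nudge a belief in [0, 1] toward evidence in [0, 1],
    overweighting evidence that agrees with the prior."""
    agrees = (evidence >= 0.5) == (prior >= 0.5)
    weight = 1.0 + consistency_bias if agrees else 1.0 - consistency_bias
    return min(1.0, max(0.0, prior + 0.1 * weight * (evidence - prior)))

belief = 0.8  # a strong fundamental prior, e.g. "my perceptions are objective"
for e in (0.2, 0.9, 0.3):  # a mixed stream of evidence
    belief = update_belief(belief, e, consistency_bias=0.5)
print(round(belief, 3))  # 0.765: the prior survives mostly intact

# Per the framework, the same latent process surfaces as different "biases"
# depending only on the dependent variable being measured:
SHARED_BELIEF_MANIFESTATIONS = {
    "bias blind spot": "rated bias in self vs. others",
    "hostile media bias": "rated slant of balanced coverage",
    "egocentric/ethnocentric bias": "rated share of own group's contribution",
}
```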

Conclusion

There have been many prior attempts at synthesizing and integrating research on (parts of) biased information processing (e.g., Birch & Bloom, 2004; Evans, 1989; Fiedler, 1996, 2000; Gawronski & Strack, 2012; Gilovich, 1991; Griffin & Ross, 1991; Hilbert, 2012; Klayman & Ha, 1987; Kruglanski et al., 2012; Kunda, 1990; Lord & Taylor, 2009; Pronin et al., 2004; Pyszczynski & Greenberg, 1987; Sanbonmatsu et al., 1998; Shermer, 1997; Skov & Sherman, 1986; Trope & Liberman, 1996). Some of them have made similar or overlapping arguments, or implicitly made assumptions similar to the ones outlined here, and thus resonate with our reasoning. In none of them, however, have we found the same line of thought and its consequences explicated.

To put it briefly, theoretical advancements necessitate integration and parsimony (the integrative potential), as well as novel ideas and hypotheses (the generative potential). We believe that the proposed framework for understanding bias as presented in this article has merits in both of these aspects. We hope to instigate discussion as well as empirical scrutiny with the ultimate goal of identifying common principles across several disparate research strands that have heretofore sought to understand human biases.


This article proposes a common framework for studying biases in information processing, aiming for parsimony in bias research. The framework suggests that biases can be understood as a result of belief-consistent information processing, and highlights the importance of considering both cognitive and motivational factors.

Friday, September 3, 2021

What is consciousness, and could machines have it?

S. Dehaene, H. Lau, & S. Kouider
Science, 27 Oct 2017: Vol. 358, Issue 6362, pp. 486-492

Abstract

The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.
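
As a rough illustration of the distinction (our sketch, not an architecture proposed in the paper), C1 can be caricatured as a workspace that selects one candidate representation for global broadcast, and C2 as a monitor that attaches a confidence estimate to that computation. Class names and scores are hypothetical.

```python
# Minimal sketch of the C1/C2 distinction (our illustration, not an
# architecture from the paper). Module names and scores are hypothetical.
import random

class GlobalWorkspace:
    """C1: select one representation and broadcast it system-wide,
    making it available for report and further computation."""
    def select_and_broadcast(self, candidates: dict) -> str:
        return max(candidates, key=candidates.get)  # attention-like selection

class ConfidenceMonitor:
    """C2: self-monitoring, a second-order estimate of reliability."""
    def confidence(self, signal_strength: float, noise: float = 0.1) -> float:
        return min(1.0, max(0.0, signal_strength - random.uniform(0.0, noise)))

# C0: all the candidate computations below, absent broadcast and monitoring.
percepts = {"face": 0.7, "word": 0.4, "tone": 0.2}  # unconscious candidates
c1_content = GlobalWorkspace().select_and_broadcast(percepts)  # "face" is reportable
c2_confidence = ConfidenceMonitor().confidence(percepts[c1_content])
print(c1_content, round(c2_confidence, 2))  # e.g. face 0.66
```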

From Concluding remarks

Our stance is based on a simple hypothesis: What we call “consciousness” results from specific types of information-processing computations, physically realized by the hardware of the brain. It differs from other theories in being resolutely computational; we surmise that mere information-theoretic quantities do not suffice to define consciousness unless one also considers the nature and depth of the information being processed.

We contend that a machine endowed with C1 and C2 would behave as though it were conscious; for instance, it would know that it is seeing something, would express confidence in it, would report it to others, could suffer hallucinations when its monitoring mechanisms break down, and may even experience the same perceptual illusions as humans. Still, such a purely functional definition of consciousness may leave some readers unsatisfied. Are we “over-intellectualizing” consciousness, by assuming that some high-level cognitive functions are necessarily tied to consciousness? Are we leaving aside the experiential component (“what it is like” to be conscious)? Does subjective experience escape a computational definition?

Although those philosophical questions lie beyond the scope of the present paper, we close by noting that empirically, in humans the loss of C1 and C2 computations covaries with a loss of subjective experience. 

Wednesday, June 9, 2021

Towards a computational theory of social groups: A finite set of cognitive primitives for representing any and all social groups in the context of conflict

Pietraszewski, D. (2021). 
Behavioral and Brain Sciences, 1-62. 
doi:10.1017/S0140525X21000583

Abstract

We don't yet have adequate theories of what the human mind is representing when it represents a social group. Worse still, many people think we do. This mistaken belief is a consequence of the state of play: Until now, researchers have relied on their own intuitions to link up the concept social group on the one hand, and the results of particular studies or models on the other. While necessary, this reliance on intuition has been purchased at considerable cost. When looked at soberly, existing theories of social groups are either (i) literal, but not remotely adequate (such as models built atop economic games), or (ii) simply metaphorical (typically a subsumption or containment metaphor). Intuition is filling in the gaps of an explicit theory. This paper presents a computational theory of what, literally, a group representation is in the context of conflict: it is the assignment of agents to specific roles within a small number of triadic interaction types. This “mental definition” of a group paves the way for a computational theory of social groups—in that it provides a theory of what exactly the information-processing problem of representing and reasoning about a group is. For psychologists, this paper offers a different way to conceptualize and study groups, and suggests that a non-tautological definition of a social group is possible. For cognitive scientists, this paper provides a computational benchmark against which natural and artificial intelligences can be held.
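
The core claim lends itself to a small data structure. The sketch below is ours, and the interaction-type names are placeholders rather than the paper's actual set; what it shows is the shape of the proposal: a group is represented not as a labeled container but as the assignment of agents to roles within triadic interaction types.

```python
from dataclasses import dataclass
from enum import Enum, auto

class TriadicType(Enum):
    # Placeholder names; the paper specifies its own finite set of
    # conflict-related triadic interaction types.
    JOINT_ATTACK = auto()
    DEFENSE = auto()
    RETALIATION = auto()

@dataclass(frozen=True)
class TriadicRepresentation:
    """One unit of group representation: a type plus three role slots."""
    interaction: TriadicType
    aggressor: str
    target: str
    third_party: str

# "B and C belong together" is not stored as a label or a container; it is
# implied by the role assignments across represented interactions.
rep = TriadicRepresentation(TriadicType.DEFENSE,
                            aggressor="A", target="B", third_party="C")
print(rep)  # C defends B against A -> B and C are treated as co-members
```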

Summary and Conclusion

Despite an enormous literature on groups and group dynamics, little attention has been paid to explicit computational theories of how the mind represents and reasons about groups. The goal of this paper has been, in a conceptual, non-technical manner, to propose a simple but non-trivial framework for starting to ask questions about the nature of the underlying representations that make the phenomenon of social groups possible—all described at the level of information processing. This computational theory, when combined with many more such theories—and followed by extensive task analyses and empirical investigations—will eventually contribute to a full accounting of the information-processing required to represent, reason about, and act in accordance with group representations.

Monday, November 11, 2019

Why a computer will never be truly conscious

Subhash Kak
The Conversation
Originally published October 16, 2019

Here is an excerpt:

Brains don’t operate like computers

Living organisms store experiences in their brains by adapting neural connections in an active process between the subject and the environment. By contrast, a computer records data in short-term and long-term memory blocks. That difference means the brain’s information handling must also be different from how computers work.

The mind actively explores the environment to find elements that guide the performance of one action or another. Perception is not directly related to the sensory data: a person can identify a table from many different angles, without having to consciously interpret the data and then ask their memory whether that pattern could be created by alternate views of an item identified some time earlier.

Another perspective on this is that the most mundane memory tasks are associated with multiple areas of the brain – some of which are quite large. Skill learning and expertise involve reorganization and physical changes, such as changing the strengths of connections between neurons. Those transformations cannot be replicated fully in a computer with a fixed architecture.

The info is here.

Monday, April 22, 2019

Moral identity relates to the neural processing of third-party moral behavior

Carolina Pletti, Jean Decety, & Markus Paulus
Social Cognitive and Affective Neuroscience
https://doi.org/10.1093/scan/nsz016

Abstract

Moral identity, or moral self, is the degree to which being moral is important to a person’s self-concept. It is hypothesized to be the “missing link” between moral judgment and moral action. However, its cognitive and psychophysiological mechanisms are still subject to debate. In this study, we used Event-Related Potentials (ERPs) to examine whether the moral self concept is related to how people process prosocial and antisocial actions. To this end, participants’ implicit and explicit moral self-concept was assessed. We examined whether individual differences in moral identity relate to differences in early, automatic processes (i.e. EPN, N2) or late, cognitively controlled processes (i.e. LPP) while observing prosocial and antisocial situations. Results show that a higher implicit moral self was related to a lower EPN amplitude for prosocial scenarios. In addition, an enhanced explicit moral self was related to a lower N2 amplitude for prosocial scenarios. The findings demonstrate that the moral self affects the neural processing of morally relevant stimuli during third-party evaluations. They support theoretical considerations that the moral self already affects (early) processing of moral information.
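
As a methods illustration only (our sketch; the numbers are fabricated placeholders, not the study's data), the individual-differences result amounts to correlating moral-self scores with ERP amplitudes across participants:

```python
import statistics

def pearson_r(x, y):
    """Plain Pearson correlation, no external dependencies."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Fabricated placeholder values, one pair per participant:
implicit_moral_self = [0.2, 0.5, 0.6, 0.8, 0.9]    # e.g. IAT-style scores
epn_prosocial_uV = [-4.0, -3.1, -2.7, -2.0, -1.6]  # EPN amplitude (microvolts)
print(round(pearson_r(implicit_moral_self, epn_prosocial_uV), 2))
# A strong positive r here would mirror the reported pattern: higher
# implicit moral self, lower (less negative) EPN for prosocial scenarios.
```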

Here is the conclusion:

Taken together, notwithstanding some limitations, this study provides novel insights into the nature of the moral self. Importantly, the results suggest that the moral self concept influences the early processing of morally relevant contexts. Moreover, the implicit and the explicit moral self concepts have different neural correlates, influencing respectively early and intermediate processing stages. Overall, the findings inform theoretical approaches on how the moral self informs social information processing (Lapsley & Narvaez, 2004).

Saturday, January 26, 2019

People use less information than they think to make up their minds

Nadav Klein and Ed O’Brien
PNAS December 26, 2018 115 (52) 13222-13227

Abstract

A world where information is abundant promises unprecedented opportunities for information exchange. Seven studies suggest these opportunities work better in theory than in practice: People fail to anticipate how quickly minds change, believing that they and others will evaluate more evidence before making up their minds than they and others actually do. From evaluating peers, marriage prospects, and political candidates to evaluating novel foods, goods, and services, people consume far less information than expected before deeming things good or bad. Accordingly, people acquire and share too much information in impression-formation contexts: People overvalue long-term trials, overpay for decision aids, and overwork to impress others, neglecting the speed at which conclusions will form. In today’s information age, people may intuitively believe that exchanging ever-more information will foster better-informed opinions and perspectives—but much of this information may be lost on minds long made up.

Significance

People readily categorize things as good or bad, a welcome adaptation that enables action and reduces information overload. The present research reveals an unforeseen consequence: People do not fully appreciate this immediacy of judgment, instead assuming that they and others will consider more information before forming conclusions than they and others actually do. This discrepancy in perceived versus actual information use reveals a general psychological bias that bears particular relevance in today’s information age. Presumably, one hopes that easy access to abundant information fosters uniformly more-informed opinions and perspectives. The present research suggests mere access is not enough: Even after paying costs to acquire and share ever-more information, people then stop short and do not incorporate it into their judgments.
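
A toy simulation makes the gap concrete (our sketch, with hypothetical thresholds and an invented evidence distribution, not the authors' model): a judge samples evidence until a low cumulative threshold is crossed, while a forecaster assumes a much higher threshold.

```python
# Toy model (ours, not the authors') of the paper's core gap: forecasters
# assume a higher evidence threshold than judges actually use.
import random

def pieces_before_judgment(threshold: float, drift: float = 0.3) -> int:
    """Sample unit evidence until cumulative valence crosses +/- threshold."""
    total, n = 0.0, 0
    while abs(total) < threshold:
        total += random.gauss(drift, 1.0)  # one more piece of evidence
        n += 1
    return n

random.seed(0)
actual = sum(pieces_before_judgment(2.0) for _ in range(1000)) / 1000
forecast = sum(pieces_before_judgment(5.0) for _ in range(1000)) / 1000
print(f"actual ~{actual:.1f} pieces; forecast ~{forecast:.1f} pieces")
# The forecaster's implied threshold predicts far more evidence consumption
# than the judge's actual stopping rule produces.
```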

Tuesday, November 14, 2017

What is consciousness, and could machines have it?

Stanislas Dehaene, Hakwan Lau, & Sid Kouider
Science  27 Oct 2017: Vol. 358, Issue 6362, pp. 486-492

Abstract

The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.

The article is here.

Thursday, October 12, 2017

New Theory Cracks Open the Black Box of Deep Learning

Natalie Wolchover
Quanta Magazine
Originally published September 21, 2017

Here is an excerpt:

In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied.
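
For readers who want the formal statement: the information bottleneck objective introduced by Tishby, Pereira, and Bialek in 1999 seeks an encoding p(t|x) of the input X into a representation T that is as compressed as possible while preserving information about the target Y:

```latex
% Information bottleneck Lagrangian (Tishby, Pereira & Bialek, 1999)
\min_{p(t \mid x)} \; I(X;T) \, - \, \beta \, I(T;Y)
```

Here I(·;·) is mutual information and β ≥ 0 sets the trade-off; the "squeezing" described above corresponds to driving I(X;T) down while keeping I(T;Y) high.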

Tishby’s findings have the AI community buzzing. “I believe that the information bottleneck idea could be very important in future deep neural network research,” said Alex Alemi of Google Research, who has already developed new approximation methods for applying an information bottleneck analysis to large deep neural networks. The bottleneck could serve “not only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks,” Alemi said.

Some researchers remain skeptical that the theory fully accounts for the success of deep learning, but Kyle Cranmer, a particle physicist at New York University who uses machine learning to analyze particle collisions at the Large Hadron Collider, said that as a general principle of learning, it “somehow smells right.”

The article is here.

Friday, August 11, 2017

The real problem (of consciousness)

Anil K Seth
Aeon.com
Originally posted November 2, 2016

Here is an excerpt:

The classical view of perception is that the brain processes sensory information in a bottom-up or ‘outside-in’ direction: sensory signals enter through receptors (for example, the retina) and then progress deeper into the brain, with each stage recruiting increasingly sophisticated and abstract processing. In this view, the perceptual ‘heavy-lifting’ is done by these bottom-up connections. The Helmholtzian view inverts this framework, proposing that signals flowing into the brain from the outside world convey only prediction errors – the differences between what the brain expects and what it receives. Perceptual content is carried by perceptual predictions flowing in the opposite (top-down) direction, from deep inside the brain out towards the sensory surfaces. Perception involves the minimisation of prediction error simultaneously across many levels of processing within the brain’s sensory systems, by continuously updating the brain’s predictions. In this view, which is often called ‘predictive coding’ or ‘predictive processing’, perception is a controlled hallucination, in which the brain’s hypotheses are continually reined in by sensory signals arriving from the world and the body. ‘A fantasy that coincides with reality,’ as the psychologist Chris Frith eloquently put it in Making Up the Mind (2007).
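
A one-level numerical toy (ours, with made-up numbers) captures the basic loop: a top-down estimate is revised only by the prediction error that flows in from the sensory surface.

```python
# Minimal one-level predictive-coding loop (our toy illustration).
# mu is the top-down prediction, x the sensory input, and only the
# error (x - mu) flows "upward" to drive updates.
import random

random.seed(1)
true_signal = 5.0  # the state of the world (hypothetical)
mu = 0.0           # the brain's current top-down prediction
precision = 0.2    # hypothetical weighting of prediction errors

for _ in range(50):
    x = true_signal + random.gauss(0.0, 0.5)  # noisy bottom-up sample
    prediction_error = x - mu                 # the only "outside-in" message
    mu += precision * prediction_error        # revise the hypothesis

print(round(mu, 2))  # settles near 5.0: a prediction reined in by the senses
```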

Armed with this theory of perception, we can return to consciousness. Now, instead of asking which brain regions correlate with conscious (versus unconscious) perception, we can ask: which aspects of predictive perception go along with consciousness? A number of experiments are now indicating that consciousness depends more on perceptual predictions, than on prediction errors. In 2001, Alvaro Pascual-Leone and Vincent Walsh at Harvard Medical School asked people to report the perceived direction of movement of clouds of drifting dots (so-called ‘random dot kinematograms’). They used TMS to specifically interrupt top-down signalling across the visual cortex, and they found that this abolished conscious perception of the motion, even though bottom-up signals were left intact.

The article is here.

Friday, April 28, 2017

How rational is our rationality?

Interview by Richard Marshall
3:AM Magazine
Originally posted March 18, 2017

Here is an excerpt:

As I mentioned earlier, I think that the point of the study of rationality, and of normative epistemology more generally, is to help us figure out how to inquire, and the aim of inquiry, I believe, is to get at the truth. This means that there had better be a close connection between what we conclude about what’s rational to believe, and what we expect to be true. But it turns out to be very tricky to say what the nature of this connection is! For example, we know that sometimes evidence can mislead us, and so rational beliefs can be false. This means that there’s no guarantee that rational beliefs will be true. The goal of the paper is to get clear about why, and to what extent, it nonetheless makes sense to expect that rational beliefs will be more accurate than irrational ones. One reason this should be of interest to non-philosophers is that if it turns out that there isn’t some close connection between rationality and truth, then we should be much less critical of people with irrational beliefs. They may reasonably say: “Sure, my belief is irrational – but I care about the truth, and since my irrational belief is true, I won’t abandon it!” It seems like there’s something wrong with this stance, but to justify why it’s wrong, we need to get clear on the connection between a judgment about a belief’s rationality and a judgment about its truth. The account I give is difficult to summarize in just a few sentences, but I can say this much: what we say about the connection between what’s rational and what’s true will depend on whether we think it’s rational to doubt our own rationality. If it can be rational to doubt our own rationality (which I think is plausible), then the connection between rationality and truth is, in a sense, surprisingly tenuous.

The interview is here.

Thursday, April 13, 2017

Humans selectively edit reality before accepting it

Olivia Goldhill
Quartz
Originally published March 26, 2017

Knowledge is power, so the saying goes, which makes it all the more striking how determined humans are to avoid useful information. Research in psychology, economics, and sociology has, over the course of several decades, highlighted countless examples of cases where humans are apt to ignore information. A review of these earlier studies by Carnegie Mellon University researchers, published this month in the Journal of Economic Literature, shows the extent to which humans avoid information and so selectively edit their own reality.

Rather than highlighting all the myriad ways humans fail to proactively seek out useful information, the paper’s authors focus on active information avoidance: cases where individuals know information is available and have free access to it, yet choose not to consider it. Examples of this phenomenon, revealed by the previous studies, include investors not looking at their financial portfolios when the stock market is down; patients taking STD tests and then failing to obtain the results; professionals refusing to look at their colleagues’ feedback on their work; and even the propensity of wealthy people to avoid poor neighborhoods so they don’t become aware of, and feel guilt over, their own privilege.

The article is here.

Tuesday, April 4, 2017

Illusions in Reasoning

Sangeet S. Khemlani & P. N. Johnson-Laird
Minds & Machines
DOI 10.1007/s11023-017-9421-x

Abstract

Some philosophers argue that the principles of human reasoning are impeccable and that mistakes are no more than momentary lapses in "information processing." This article makes a case to the contrary. It shows that human reasoners commit systematic fallacies. The theory of mental models predicts these errors. It postulates that individuals construct mental models of the possibilities to which the premises of an inference refer. But their models usually represent what is true in a possibility, not what is false. This procedure reduces the load on working memory, and for the most part it yields valid inferences. However, as a computer program implementing the theory revealed, it leads to fallacious conclusions for certain inferences—those for which it is crucial to represent what is false in a possibility. Experiments demonstrate the variety of these fallacies and contrast them with control problems, which reasoners tend to get right. The fallacies can be compelling illusions, and they occur in reasoning based on sentential connectives such as "if" and "or", quantifiers such as "all the artists" and "some of the artists", deontic relations such as "permitted" and "obligated", and causal relations such as "causes" and "allows". After we have reviewed the principal results, we consider the potential for alternative accounts to explain these illusory inferences. And we show how the illusions illuminate the nature of human rationality.
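
One canonical illusion of this kind can be checked by brute force. The sketch below is ours (the authors' program is more general): the premises are "either if there is a king then there is an ace, or else if there is not a king then there is an ace" and "there is a king." Most reasoners confidently conclude that there is an ace; exhaustive enumeration shows the opposite.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Premise 1 (exclusive "or else"): exactly one of the conditionals holds.
# Premise 2: there is a king.
consistent = [
    (king, ace)
    for king, ace in product([True, False], repeat=2)
    if (implies(king, ace) != implies(not king, ace))  # exclusive disjunction
    and king
]
print(consistent)  # [(True, False)]: in the only consistent case, NO ace

# Mental models of the premises represent only what is true in each
# possibility (a king with an ace; no king with an ace) and omit the
# falsity of the unchosen conditional -- which makes the fallacy compelling.
```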

Find it here.

Thursday, October 6, 2016

How Unconscious Bias Is Affecting Our Ability To Listen

Vivian Giang
The Fast Company
Originally published September 8, 2016

Here is an excerpt:

Meghan Sumner, an associate professor of linguistics at Stanford University, stumbled into the unconscious-bias realm after years of investigating how listeners extract information from voices and how those pieces of information are stored in memory. In study after study, she found that we all listen differently based on where we’re from and our feelings toward different accents. It’s not a conscious choice but the result of social biases that form unconscious stereotypes, which then influence the way we listen.

"It’s not always what someone said, it’s also how they said it," Sumner tells Fast Company. "How we view people socially from their voice, influences how we attend to them, how we listen to them."

For instance, in one experiment, Sumner found that the "average American listener" preferred a "Southern Standard British English" voice to one with a New York City accent, even when both voices said the same words. Consequently, listeners remembered more of what the British English speaker said and deemed that speaker smarter. All of this is shaped by the stereotypes we hold about British people and New Yorkers.

The article is here.

Saturday, January 9, 2016

Moral judgment as information processing: an integrative review

Steve Guglielmo
Front Psychol. 2015; 6: 1637.
Published online 2015 Oct 30. doi: 10.3389/fpsyg.2015.01637

Abstract

How do humans make moral judgments about others’ behavior? This article reviews dominant models of moral judgment, organizing them within an overarching framework of information processing. This framework poses two distinct questions: (1) What input information guides moral judgments? and (2) What psychological processes generate these judgments? Information Models address the first question, identifying critical information elements (including causality, intentionality, and mental states) that shape moral judgments. A subclass of Biased Information Models holds that perceptions of these information elements are themselves driven by prior moral judgments. Processing Models address the second question, and existing models have focused on the relative contribution of intuitive versus deliberative processes. This review organizes existing moral judgment models within this framework and critically evaluates them on empirical and theoretical grounds; it then outlines a general integrative model grounded in information processing, and concludes with conceptual and methodological suggestions for future research. The information-processing framework provides a useful theoretical lens through which to organize extant and future work in the rapidly growing field of moral judgment.
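
The framework's two questions can be caricatured in a few lines (our toy, not a model from the review): an Information Model fixes which inputs enter the judgment, a Processing Model fixes how they are combined, and a Biased Information Model adds a feedback loop from prior judgment to perceived inputs. All weights and values are hypothetical.

```python
# Toy rendering (ours, not Guglielmo's model) of the review's framework.
# All weights and feature values are hypothetical.

def moral_judgment(causality: float, intentionality: float,
                   mental_states: float,
                   weights=(0.3, 0.4, 0.3)) -> float:
    """Blame in [0, 1] from the review's key information elements."""
    w_c, w_i, w_m = weights
    return w_c * causality + w_i * intentionality + w_m * mental_states

def biased_input(raw: float, prior_blame: float, bias: float = 0.2) -> float:
    """Biased Information Model: prior judgment inflates perceived inputs."""
    return min(1.0, raw + bias * prior_blame)

blame = moral_judgment(causality=0.9, intentionality=0.8, mental_states=0.7)
print(round(blame, 2))                     # 0.8
print(round(biased_input(0.5, blame), 2))  # 0.66: motivated perception
```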

The entire article is here.