Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Unconscious Processes. Show all posts

Saturday, June 22, 2024

The Ethical Implications of Illusionism

Frankish, K.
Neuroethics 17, 28 (2024).

Abstract

Illusionism is a revisionary view of consciousness, which denies the existence of the phenomenal properties traditionally thought to render experience conscious. The view has theoretical attractions, but some think it also has objectionable ethical implications. They take illusionists to be denying the existence of consciousness itself, or at least of the thing that gives consciousness its ethical value, and thus as undermining our established ethical attitudes. This article responds to this objection. I argue that, properly understood, illusionism neither denies the existence of consciousness nor entails that consciousness does not ground ethical value. It merely offers a different account of what consciousness is and why it grounds ethical value. The article goes on to argue that the theoretical revision proposed by illusionists does have some indirect implications for our ethical attitudes but that these are wholly attractive and progressive ones. The illusionist perspective on consciousness promises to make ethical decision making easier and to extend the scope of our ethical concern. Illusionism is good news.

The article is free, and linked above.

Here are some important points:

The illusionist perspective argues that our conscious experiences and choices are not the result of free will, but rather the product of unconscious neural processes and external factors beyond our control. This view suggests that we should shift our focus from solely blaming individuals for their actions to considering the external factors (e.g., social structures, environmental influences) that shape behavior. Ethicists must reevaluate the concept of individual responsibility and moral condemnation, as people's choices and actions may not be entirely their own. Instead, a more nuanced and empathetic approach that acknowledges the complex interplay of forces influencing human behavior is necessary for ethical decision-making.

Moreover, the illusionist perspective has the potential to expand the scope of our ethical concern. If conscious experiences are not real in the way we typically assume, then the boundaries of moral consideration may need to be extended beyond just conscious beings. This could have significant implications for ethical debates surrounding the treatment of non-human animals, artificial intelligence, and even the environment. Ethicists must grapple with these profound questions as our understanding of consciousness evolves.

Monday, May 30, 2022

Free will without consciousness?

L. Mudrik, I. G. Arie, et al.
Trends in Cognitive Sciences
Available online 12 April 2022

Abstract

Findings demonstrating decision-related neural activity preceding volitional actions have dominated the discussion about how science can inform the free will debate. These discussions have largely ignored studies suggesting that decisions might be influenced or biased by various unconscious processes. If these effects are indeed real, do they render subjects’ decisions less free or even unfree? Here, we argue that, while unconscious influences on decision-making do not threaten the existence of free will in general, they provide important information about limitations on freedom in specific circumstances. We demonstrate that aspects of this long-lasting controversy are empirically testable and provide insight into their bearing on degrees of freedom, laying the groundwork for future scientific-philosophical approaches.

Highlights
  • A growing body of literature argues for unconscious effects on decision-making.
  • We review a body of such studies while acknowledging methodological limitations, and categorize the types of unconscious influence reported.
  • These effects intuitively challenge free will, despite being generally overlooked in the free will literature. To what extent can decisions be free if they are affected by unconscious factors?
  • Our analysis suggests that unconscious influences on behavior affect degrees of control or reasons-responsiveness. We argue that they do not threaten the existence of free will in general, but only the degree to which we can be free in specific circumstances.

Concluding remarks

Current findings of unconscious effects on decision-making do not threaten the existence of free will in general. Yet, the results still show ways in which our freedom can be compromised under specific circumstances. More experimental and philosophical work is needed to delineate the limits and scope of these effects on our freedom (see Outstanding questions). We have evolved to be the decision-makers that we are; thus, our decisions are affected by biases, internal states, and external contexts. However, we can at least sometimes resist those, if we want, and this ability to resist influences contrary to our preferences and reasons is considered a central feature of freedom. As long as this ability is preserved, and the reviewed findings do not suggest otherwise, we are still free, at least usually and to a significant degree.

Friday, September 3, 2021

What is consciousness, and could machines have it?

S. Dehaene, H. Lau, & S. Kouider
Science, 27 Oct 2017:
Vol. 358, Issue 6362, pp. 486-492

Abstract

The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.

From Concluding remarks

Our stance is based on a simple hypothesis: What we call “consciousness” results from specific types of information-processing computations, physically realized by the hardware of the brain. It differs from other theories in being resolutely computational; we surmise that mere information-theoretic quantities do not suffice to define consciousness unless one also considers the nature and depth of the information being processed.

We contend that a machine endowed with C1 and C2 would behave as though it were conscious; for instance, it would know that it is seeing something, would express confidence in it, would report it to others, could suffer hallucinations when its monitoring mechanisms break down, and may even experience the same perceptual illusions as humans. Still, such a purely functional definition of consciousness may leave some readers unsatisfied. Are we “over-intellectualizing” consciousness, by assuming that some high-level cognitive functions are necessarily tied to consciousness? Are we leaving aside the experiential component (“what it is like” to be conscious)? Does subjective experience escape a computational definition?

Although those philosophical questions lie beyond the scope of the present paper, we close by noting that empirically, in humans the loss of C1 and C2 computations covaries with a loss of subjective experience. 

Friday, January 31, 2020

Strength of conviction won’t help to persuade when people disagree

Pressor
ucl.ac.uk
Originally posted 16 Dec 19

The brain scanning study, published in Nature Neuroscience, reveals a new type of confirmation bias that can make it very difficult to alter people’s opinions.

“We found that when people disagree, their brains fail to encode the quality of the other person’s opinion, giving them less reason to change their mind,” said the study’s senior author, Professor Tali Sharot (UCL Psychology & Language Sciences).

For the study, the researchers asked 42 participants, split into pairs, to estimate house prices. They each wagered on whether the asking price would be more or less than a set amount, depending on how confident they were. Next, each lay in an MRI scanner with the two scanners divided by a glass wall. On their screens they were shown the properties again, reminded of their own judgements, then shown their partner’s assessment and wagers, and finally were asked to submit a final wager.

The researchers found that, when both participants agreed, people would increase their final wagers to larger amounts, particularly if their partner had placed a high wager.

Conversely, when the partners disagreed, the opinion of the disagreeing partner had little impact on people’s wagers, even if the disagreeing partner had placed a high wager.

The researchers found that one brain area, the posterior medial prefrontal cortex (pMFC), was involved in incorporating another person’s beliefs into one’s own. Brain activity differed depending on the strength of the partner’s wager, but only when they were already in agreement. When the partners disagreed, there was no relationship between the partner’s wager and brain activity in the pMFC region.

The info is here.

Thursday, January 9, 2020

How implicit bias harms patient care

Jeff Bendix
medicaleconomics.com
Originally posted 25 Nov 19

Here is an excerpt:

While many people have difficulty acknowledging that their actions are influenced by unconscious biases, the concept is particularly troubling for doctors, who have been trained to view—and treat—patients equally, and the vast majority of whom sincerely believe that they do.

“Doctors have been molded throughout medical school and all our training to be non-prejudiced when it comes to treating patients,” says James Allen, MD, a pulmonologist and medical director of University Hospital East, part of Ohio State University’s Wexner Medical Center. “It’s not only asked of us, it’s demanded of us, so many physicians would like to think they have no biases. But it’s not true. All human beings have biases.”

“Among physicians, there’s a stigma attached to any suggestion of racial bias,” adds Penner. “And were a person to be identified that way, there could be very severe consequences in terms of their career prospects or even maintaining their license.”

Ironically, as Penner and others point out, the conditions under which most doctors practice today—high levels of stress, frequent distractions, and brief visits that allow little time to get to know patients—are the ones most likely to heighten their vulnerability to unintentional biases.

“A doctor under time pressure from a backlog of overdue charting and whatever else they’re dealing with will have a harder time treating all patients with the same level of empathy and concern,” van Ryn says.

The info is here.

Saturday, August 24, 2019

Decoding the neuroscience of consciousness

Emily Sohn
Nature.com
Originally published July 24, 2019

Here is an excerpt:

That disconnect might also offer insight into why current medications for anxiety do not always work as well as people hope, LeDoux says. Developed through animal studies, these medications might target circuits in the amygdala and affect a person’s behaviours, such as their level of timidity — making it easier for them to go to social events. But such drugs don’t necessarily affect the conscious experience of fear, which suggests that future treatments might need to address both unconscious and conscious processes separately. “We can take a brain-based approach that sees these different kinds of symptoms as products of different circuits, and design therapies that target the different circuits systematically,” he says. “Turning down the volume doesn’t change the song — only its level.”

Psychiatric disorders are another area of interest for consciousness researchers, Lau says, on the basis that some mental-health conditions, including schizophrenia, obsessive–compulsive disorder and depression, might be caused by problems at the unconscious level — or even by conflicts between conscious and unconscious pathways. The link is only hypothetical so far, but Seth has been probing the neural basis of hallucinations with a ‘hallucination machine’ — a virtual-reality program that uses machine learning to simulate visual hallucinatory experiences in people with healthy brains. Through experiments, he and his colleagues have shown that these hallucinations resemble the types of visions that people experience while taking psychedelic drugs, which have increasingly been used as a tool to investigate the neural underpinnings of consciousness.

If researchers can uncover the mechanisms behind hallucinations, they might be able to manipulate the relevant areas of the brain and, in turn, treat the underlying cause of psychosis — rather than just address the symptoms. By demonstrating how easy it is to manipulate people’s perceptions, Seth adds, the work suggests that our sense of reality is just another facet of how we experience the world.

The info is here.

Tuesday, August 20, 2019

What Alan Dershowitz taught me about morality

Molly Roberts
The Washington Post
Originally posted August 2, 2019

Here are two excerpts:

Dershowitz has been defending Donald Trump on television for years, casting himself as a warrior for due process. Now, Dershowitz is defending himself on TV, too, against accusations at the least that he knew about Epstein allegedly trafficking underage girls for sex with men, and at the worst that he was one of the men.

These cases have much in common, and they both bring me back to the classroom that day when no one around the table — not the girl who invoked Ernest Hemingway’s hedonism, nor the boy who invoked God’s commandments — seemed to know where our morality came from. Which was probably the point of the exercise.

(cut)

You can make a convoluted argument that investigations of the president constitute irresponsible congressional overreach, but contorting the Constitution is your choice, and the consequences to the country of your contortion are yours to own, too. Everyone deserves a defense, but lawyers in private practice choose their clients — and putting a particular focus on championing those Dershowitz calls the “most unpopular, most despised” requires grappling with what it means for victims when an abuser ends up with a cozy plea deal.

When the alleged abuser is your friend Jeffrey, whose case you could have avoided precisely because you have a personal relationship, that grappling is even more difficult. Maybe it’s still all worth it to keep the system from falling apart, because next time it might not be a billionaire financier who wanted to seed the human race with his DNA on the stand, but a poor teenager framed for a crime he didn’t commit.

Dershowitz once told the New York Times he regretted taking Epstein’s case. He told me, “I would do it again.”

The info is here.

Monday, May 13, 2019

How has President Trump changed white Christians' views of 'morality'?

Brandon Showalter
The Christian Post
Originally published April 26, 2019

A notable shift has taken place within the past decade regarding how white evangelicals consider "morality" with regard to the politicians they support.

While the subject was frequently discussed during the 2016 election cycle in light of significant support then-candidate Donald Trump received from evangelical Christians, the attitude shift related to what an elected official does in his private life having any bearing on his public duties appears to have persisted over two years into his presidency, The Washington Post noted Thursday.

A 2011 Public Religion Research Institute (PRRI) and Religion News Service poll found that 60 percent of white evangelicals believed that a public official who “commits an immoral act in their personal life” cannot still “behave ethically and fulfill their duties in their public and professional life.”

By October 2016, however, shortly after the release of the “Access Hollywood” tape in which President Trump was heard making lewd comments, another PRRI poll found that only 20 percent of white evangelicals answered the same question the same way.

No other religious demographic saw such a profound change.

The info is here.

Monday, April 22, 2019

Moral identity relates to the neural processing of third-party moral behavior

Carolina Pletti, Jean Decety, & Markus Paulus
Social Cognitive and Affective Neuroscience
https://doi.org/10.1093/scan/nsz016

Abstract

Moral identity, or moral self, is the degree to which being moral is important to a person’s self-concept. It is hypothesized to be the “missing link” between moral judgment and moral action. However, its cognitive and psychophysiological mechanisms are still subject to debate. In this study, we used Event-Related Potentials (ERPs) to examine whether the moral self concept is related to how people process prosocial and antisocial actions. To this end, participants’ implicit and explicit moral self-concept was assessed. We examined whether individual differences in moral identity relate to differences in early, automatic processes (i.e. EPN, N2) or late, cognitively controlled processes (i.e. LPP) while observing prosocial and antisocial situations. Results show that a higher implicit moral self was related to a lower EPN amplitude for prosocial scenarios. In addition, an enhanced explicit moral self was related to a lower N2 amplitude for prosocial scenarios. The findings demonstrate that the moral self affects the neural processing of morally relevant stimuli during third-party evaluations. They support theoretical considerations that the moral self already affects (early) processing of moral information.

Here is the conclusion:

Taken together, notwithstanding some limitations, this study provides novel insights into the nature of the moral self. Importantly, the results suggest that the moral self concept influences the early processing of morally relevant contexts. Moreover, the implicit and the explicit moral self concepts have different neural correlates, influencing respectively early and intermediate processing stages. Overall, the findings inform theoretical approaches on how the moral self informs social information processing (Lapsley & Narvaez, 2004).

Monday, April 1, 2019

Neuroscience Readies for a Showdown Over Consciousness Ideas

Philip Ball
Quanta Magazine
Originally published March 6, 2019

Here is an excerpt:

Philosophers have debated the nature of consciousness and whether it can inhere in things other than humans for thousands of years, but in the modern era, pressing practical and moral implications make the need for answers more urgent. As artificial intelligence (AI) grows increasingly sophisticated, it might become impossible to tell whether one is dealing with a machine or a human merely by interacting with it — the classic Turing test. But would that mean AI deserves moral consideration?

Understanding consciousness also impinges on animal rights and welfare, and on a wide range of medical and legal questions about mental impairments. A group of more than 50 leading neuroscientists, psychologists, cognitive scientists and others recently called for greater recognition of the importance of research on this difficult subject. “Theories of consciousness need to be tested rigorously and revised repeatedly amid the long process of accumulation of empirical evidence,” the authors said, adding that “myths and speculative conjectures also need to be identified as such.”

You can hardly do experiments on consciousness without having first defined it. But that’s already difficult because we use the word in several ways. Humans are conscious beings, but we can lose consciousness, for example under anesthesia. We can say we are conscious of something — a strange noise coming out of our laptop, say. But in general, the quality of consciousness refers to a capacity to experience one’s existence rather than just recording it or responding to stimuli like an automaton. Philosophers of mind often refer to this as the principle that one can meaningfully speak about what it is to be “like” a conscious being — even if we can never actually have that experience beyond ourselves.

The info is here.

Monday, December 31, 2018

How free is our will?

Kevin Mitchell
Wiring The Brain Blog
Originally posted November 25, 2018

Here is an excerpt:

Being free – to my mind at least – doesn’t mean making decisions for no reasons, it means making them for your reasons. Indeed, I would argue that this is exactly what is required to allow any kind of continuity of the self. If you were just doing things on a whim all the time, what would it mean to be you? We accrue our habits and beliefs and intentions and goals over our lifetime, and they collectively affect how actions are suggested and evaluated.

Whether we are conscious of that is another question. Most of our reasons for doing things are tacit and implicit – they’ve been wired into our nervous systems without our even being aware of them. But they’re still part of us ­– you could argue they’re precisely what makes us us. Even if most of that decision-making happens subconsciously, it’s still you doing it.

Ultimately, whether you think you have free will or not may depend less on the definition of “free will” and more on the definition of “you”. If you identify just as the president – the decider-in-chief – then maybe you’ll be dismayed at how little control you seem to have or how rarely you really exercise it. (Not never, but maybe less often than your ego might like to think).

But that brings us back to a very dualist position, identifying you with only your conscious mind, as if it can somehow be separated from all the underlying workings of your brain. Perhaps it’s more appropriate to think that you really comprise all of the machinery of government, even the bits that the president never sees or is not even aware exists.

The info is here.

Thursday, December 13, 2018

Does deciding among morally relevant options feel like making a choice? How morality constrains people’s sense of choice

Kouchaki, M., Smith, I. H., & Savani, K. (2018).
Journal of Personality and Social Psychology, 115(5), 788-804.
http://dx.doi.org/10.1037/pspa0000128

Abstract

We demonstrate that a difference exists between objectively having and psychologically perceiving multiple-choice options of a given decision, showing that morality serves as a constraint on people’s perceptions of choice. Across 8 studies (N = 2,217), using both experimental and correlational methods, we find that people deciding among options they view as moral in nature experience a lower sense of choice than people deciding among the same options but who do not view them as morally relevant. Moreover, this lower sense of choice is evident in people’s attentional patterns. When deciding among morally relevant options displayed on a computer screen, people devote less visual attention to the option that they ultimately reject, suggesting that when they perceive that there is a morally correct option, they are less likely to even consider immoral options as viable alternatives in their decision-making process. Furthermore, we find that experiencing a lower sense of choice because of moral considerations can have downstream behavioral consequences: after deciding among moral (but not nonmoral) options, people (in Western cultures) tend to choose more variety in an unrelated task, likely because choosing more variety helps them reassert their sense of choice. Taken together, our findings suggest that morality is an important factor that constrains people’s perceptions of choice, creating a disjunction between objectively having a choice and subjectively perceiving that one has a choice.

A pdf can be found here.

Thursday, August 23, 2018

Implicit Bias in Patient Care: An Endemic Blight on Quality Care

JoAnn Grif Alspach
Critical Care Nurse
August 2018 vol. 38 no. 4 12-16

Here is an excerpt:

How Implicit Bias Is Manifested

A systematic review by Hall and colleagues revealed that implicit bias is manifested in 4 key areas: patient-provider interactions, treatment decisions, treatment adherence, and patient health outcomes. How a physician communicates, including verbal cues, body language, and nonverbal behavior (physical proximity, frequency of eye contact) may manifest subconscious bias.7,10 Several investigators found evidence that providers interact more effectively with white than nonwhite patients. Bias may affect the nature and extent of diagnostic assessments and the range and scope of therapies considered. Nonwhite patients receive fewer cardiovascular interventions and kidney transplants. One meta-analysis found that 20 of 25 assumption method studies demonstrated bias either in the diagnosis, treatment recommendations, number of questions asked, or tests ordered. Women are 3 times less likely than men to receive knee arthroplasty despite comparable indications. Bias can detrimentally affect whether patients seek or return for care, follow treatment protocols, and, perhaps cumulatively, can influence outcomes of care. Numerous research studies offer evidence that implicit bias is associated with higher complication rates, greater morbidity, and higher patient mortality.

The info is here.

Sunday, February 18, 2018

Responsibility and Consciousness

Matt King and Peter Carruthers

1. Introduction

Intuitively, consciousness matters for responsibility. A lack of awareness generally provides the basis for an excuse, or at least for blameworthiness to be mitigated. If you are aware that what you are doing will unjustifiably harm someone, it seems you are more blameworthy for doing so than if you harm them without awareness. There is thus a strong presumption that consciousness is important for responsibility. The position we stake out below, however, is that consciousness, while relevant to moral responsibility, isn’t necessary.

The background for our discussion is an emerging consensus in the cognitive sciences that a significant portion, perhaps even a substantial majority, of our mental lives takes place unconsciously. For example, routine and habitual actions are generally guided by the so-called “dorsal stream” of the visual system, whose outputs are inaccessible to consciousness (Milner & Goodale 1995; Goodale 2014). And there has been extensive investigation of the processes that accompany conscious as opposed to unconscious forms of experience (Dehaene 2014). While there is room for disagreement at the margins, there is little doubt that our actions are much more influenced by unconscious factors than might intuitively seem to be the case. At a minimum, therefore, theories of responsibility that ignore the role of unconscious factors supported by the empirical data proceed at their own peril (King & Carruthers 2012). The crucial area of inquiry for those interested in the relationship between consciousness and responsibility concerns the relative strength of that relationship and the extent to which it should be impacted by findings in the empirical sciences.

The paper is here.

Tuesday, October 31, 2017

Does Your Gut Always Steer You Right?

Elizabeth Bernstein
The Wall Street Journal
Originally published October 9, 2017

Here is an excerpt:

When should you trust your gut? Consult your gut for complex decisions.

These include important, but not life-or-death, choices such as what car to buy, where to move, which job offer to accept. Your conscious mind will have too much information to sort through, and there may not be one clear choice. For example, there’s a lot to consider when deciding on a new home: neighborhood (Close to work but not as fun? Farther away but nicer?), price, type of home (Condo or house?). Research shows that when people are given four choices of which car to buy or which apartment to rent—with slightly different characteristics to each—and then are distracted from consciously thinking about their decision, they make better choices. “Our conscious mind is not very good at having all these choices going on at once,” says Dr. Bargh. “When you let your mind work on this without paying conscious attention, you make a better decision.”

Using unconscious and conscious thought to make a decision is often best. And conscious thought should come first. An excellent way to do this is to make a list of the benefits and drawbacks of each choice you could make. We are trained in rational decision-making, so this will satisfy your conscious mind. And sometimes the list will be enough to show you a clear decision.

But if it isn’t, put it away and do something that absorbs your conscious mind. Go for a hike or run, walk on the beach, play chess, practice a musical instrument. (No vegging out in front of the TV; that’s too mind-numbing, experts say.) “Go into yourself without distractions from the outside, and your unconscious will keep working on the problem,” says Emeran Mayer, a gastroenterologist and neuroscientist and the author of “The Mind-Gut Connection” and a professor at UCLA’s David Geffen School of Medicine.

If the stakes are high, try to think rationally

Even if time is tight. For example, if your gut tells you to jump in front of a train to help someone who just fell on the tracks, that might be worth risking your life. If it’s telling you to jump in front of that train because you dropped your purse, it’s not. Your rational mind, not your gut, will know the difference, Dr. Bargh says.

The article is here.

Note: As usual, I don't agree with everything in this article.

Wednesday, August 30, 2017

Fat Shaming in the Doctor's Office Can Be Mentally and Physically Harmful

American Psychological Association
Press Release from August 3, 2017

Medical discrimination based on people’s size and negative stereotypes of overweight people can take a toll on people’s physical health and well-being, according to a review of recent research presented at the 125th Annual Convention of the American Psychological Association.

“Disrespectful treatment and medical fat shaming, in an attempt to motivate people to change their behavior, is stressful and can cause patients to delay health care seeking or avoid interacting with providers,” presenter Joan Chrisler, PhD, a professor of psychology at Connecticut College, said during a symposium titled “Weapons of Mass Distraction — Confronting Sizeism.”

Sizeism can also have an effect on how doctors medically treat patients, as overweight people are often excluded from medical research based on assumptions about their health status, Chrisler said, meaning the standard dosage for drugs may not be appropriate for larger body sizes. Recent studies have shown frequent under-dosing of overweight patients who were prescribed antibiotics and chemotherapy, she added.

“Recommending different treatments for patients with the same condition based on their weight is unethical and a form of malpractice,” Chrisler said. “Research has shown that doctors repeatedly advise weight loss for fat patients while recommending CAT scans, blood work or physical therapy for other, average weight patients.”

In some cases, providers might not take fat patients’ complaints seriously or might assume that their weight is the cause of any symptoms they experience, Chrisler added. “Thus, they could jump to conclusions or fail to run appropriate tests, which results in misdiagnosis,” she said.

The press release is here.

Tuesday, August 29, 2017

The Influence of (Dis)belief in Free Will on Immoral Behavior

Caspar, E. A., Vuillaume, L., Magalhães De Saldanha da Gama, P. A. and Cleeremans, A.
Frontiers in Psychology, 17 January 2017

Abstract

One of the hallmarks of human existence is that we all hold beliefs that determine how we act. Amongst such beliefs, the idea that we are endowed with free will appears to be linked to prosocial behaviors, probably by enhancing the feeling of responsibility of individuals over their own actions. However, such effects appear to be more complex than one might have initially thought. Here, we aimed at exploring how induced disbelief in free will impacts the sense of agency over the consequences of one’s own actions in a paradigm that engages morality. To do so, we asked participants to choose to inflict or to refrain from inflicting an electric shock on another participant in exchange for a small financial benefit. Our results show that participants who were primed with a text defending neural determinism – the idea that humans are a mere bunch of neurons guided by their biology – administered fewer shocks and were less vindictive toward the other participant. Importantly, this finding only held for female participants. These results show the complex interaction between gender, (dis)belief in free will and moral behavior.

From the Conclusion:

To conclude, we observed that disbelief in free will had a positive impact on the morality of decisions toward others. The present work extends previous research by showing that additional factors, such as gender, can influence the impact of (dis)belief in free will on prosocial and antisocial behaviors. Our results also showed that previous findings concerning the (moral) context underlying the paradigm in use are not always replicated.

The research is here.

Friday, August 11, 2017

The real problem (of consciousness)

Anil K Seth
Aeon.com
Originally posted November 2, 2016

Here is an excerpt:

The classical view of perception is that the brain processes sensory information in a bottom-up or ‘outside-in’ direction: sensory signals enter through receptors (for example, the retina) and then progress deeper into the brain, with each stage recruiting increasingly sophisticated and abstract processing. In this view, the perceptual ‘heavy-lifting’ is done by these bottom-up connections. The Helmholtzian view inverts this framework, proposing that signals flowing into the brain from the outside world convey only prediction errors – the differences between what the brain expects and what it receives. Perceptual content is carried by perceptual predictions flowing in the opposite (top-down) direction, from deep inside the brain out towards the sensory surfaces. Perception involves the minimisation of prediction error simultaneously across many levels of processing within the brain’s sensory systems, by continuously updating the brain’s predictions. In this view, which is often called ‘predictive coding’ or ‘predictive processing’, perception is a controlled hallucination, in which the brain’s hypotheses are continually reined in by sensory signals arriving from the world and the body. ‘A fantasy that coincides with reality,’ as the psychologist Chris Frith eloquently put it in Making Up the Mind (2007).

Armed with this theory of perception, we can return to consciousness. Now, instead of asking which brain regions correlate with conscious (versus unconscious) perception, we can ask: which aspects of predictive perception go along with consciousness? A number of experiments are now indicating that consciousness depends more on perceptual predictions, than on prediction errors. In 2001, Alvaro Pascual-Leone and Vincent Walsh at Harvard Medical School asked people to report the perceived direction of movement of clouds of drifting dots (so-called ‘random dot kinematograms’). They used TMS to specifically interrupt top-down signalling across the visual cortex, and they found that this abolished conscious perception of the motion, even though bottom-up signals were left intact.

The article is here.

Tuesday, July 11, 2017

Men Can Be So Hormonal

Therese Huston
The New York Times
Originally posted June 24, 2017

Here is an excerpt:

People don’t like to believe that they’re average. But compared with women, men tend to think they’re much better than average.

If you feel your judgment is right, are you interested in how others see the problem? Probably not. Nicholas D. Wright, a neuroscientist at the University of Birmingham in Britain, studies how fluctuations in testosterone shape one’s willingness to collaborate.  Most testosterone researchers study men, for obvious reasons, but Dr. Wright and his team focus on women. They asked women to perform a challenging perceptual task: detecting where a fuzzy pattern had appeared on a busy computer screen. When women took oral testosterone, they were more likely to ignore the input of others, compared with women in the placebo condition. Amped up on testosterone, they relied more heavily on their own judgment, even when they were wrong.

The findings of the latest study, which have been presented at conferences and will be published in Psychological Science in January, offer more reasons to worry about testosterone supplements.

The article is here.

Sunday, June 25, 2017

Managing for Academic Integrity in Higher Education: Insights From Behavioral Ethics

Sheldene Simola
Scholarship of Teaching and Learning in Psychology
Vol 3(1), Mar 2017, 43-57.

Despite the plethora of research on factors associated with academic dishonesty and ways of averting it, such dishonesty remains a significant concern. There is a need to identify overarching frameworks through which academic dishonesty might be understood, which might also suggest novel yet research-supported practical insights aimed at prevention. Hence, this article draws upon the burgeoning field of behavioral ethics to highlight a dual processing framework on academic dishonesty and to provide additional and sometimes counterintuitive practical insights into preventing this predicament. Six themes from within behavioral ethics are elaborated. These indicate the roles of reflective, conscious deliberation in academic (dis)honesty, as well as reflexive, nonconscious judgment; the roles of rationality and emotionality; and the ways in which conscious and nonconscious situational cues can cause individual moral identity or moral standards to become more or less salient to, and therefore influential in, decision-making. Practical insights and directions for future research are provided.

The article is here.