Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Neural Network.

Saturday, October 29, 2022

Sleep loss leads to the withdrawal of human helping across individuals, groups, and large-scale societies

Ben Simon E, Vallat R, Rossi A, Walker MP (2022) 
PLoS Biol 20(8): e3001733.
https://doi.org/10.1371/journal.pbio.3001733

Abstract

Humans help each other. This fundamental feature of Homo sapiens has been one of the most powerful forces sculpting the advent of modern civilizations. But what determines whether humans choose to help one another? Across 3 replicating studies, here, we demonstrate that sleep loss represents one previously unrecognized factor dictating whether humans choose to help each other, observed at 3 different scales (within individuals, across individuals, and across societies). First, at an individual level, 1 night of sleep loss triggers the withdrawal of help from one individual to another. Moreover, fMRI findings revealed that the withdrawal of human helping is associated with deactivation of key nodes within the social cognition brain network that facilitates prosociality. Second, at a group level, ecological night-to-night reductions in sleep across several nights predict corresponding next-day reductions in the choice to help others during day-to-day interactions. Third, at a large-scale national level, we demonstrate that 1 h of lost sleep opportunity, inflicted by the transition to Daylight Saving Time, reduces real-world altruistic helping through the act of donation giving, established through the analysis of over 3 million charitable donations. Therefore, inadequate sleep represents a significant influential force determining whether humans choose to help one another, observable across micro- and macroscopic levels of civilized interaction. The implications of this effect may be non-trivial when considering the essentiality of human helping in the maintenance of cooperative, civil society, combined with the reported decline in sufficient sleep in many first-world nations.

From the Discussion section

Taken together, findings across all 3 studies establish insufficient sleep (both quantity and quality) as a degrading force influencing whether or not humans wish to help each other, and do, indeed, choose to help each other (through real-world altruistic acts), observable at 3 different societal scales: within individuals, across individuals, and at a nationwide level.

Study 1 established not only the causal impact of sleep loss on the basic desire to help another human being, but further characterised the central underlying brain mechanism associated with this altered phenotype of diminished helping. Specifically, sleep loss significantly and selectively reduced activity throughout key nodes of the social cognition brain network (see Fig 1B) normally associated with prosociality, including perspective taking of others’ mental state, their emotions, and their personal needs. Therefore, impairment of this neural system caused by a lack of sleep represents one novel pathway explaining the associated withdrawal of helping desire and the decisional act to offer such help.

Tuesday, December 1, 2020

Using Machine Learning to Generate Novel Hypotheses: Increasing Optimism About COVID-19 Makes People Less Willing to Justify Unethical Behaviors

Sheetal A, Feng Z, Savani K. 
Psychological Science. 2020;31(10):1222-1235.
doi:10.1177/0956797620959594

Abstract

How can we nudge people to not engage in unethical behaviors, such as hoarding and violating social-distancing guidelines, during the COVID-19 pandemic? Because past research on antecedents of unethical behavior has not provided a clear answer, we turned to machine learning to generate novel hypotheses. We trained a deep-learning model to predict whether or not World Values Survey respondents perceived unethical behaviors as justifiable, on the basis of their responses to 708 other items. The model identified optimism about the future of humanity as one of the top predictors of unethicality. A preregistered correlational study (N = 218 U.S. residents) conceptually replicated this finding. A preregistered experiment (N = 294 U.S. residents) provided causal support: Participants who read a scenario conveying optimism about the COVID-19 pandemic were less willing to justify hoarding and violating social-distancing guidelines than participants who read a scenario conveying pessimism. The findings suggest that optimism can help reduce unethicality, and they document the utility of machine-learning methods for generating novel hypotheses.
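The modeling approach is easy to sketch in outline: fit a neural network to predict the target survey item from all the other items, then rank those items by how much the model's accuracy drops when each one is randomly shuffled. Below is a minimal, hypothetical Python illustration using scikit-learn; the file name, column names, network size, and importance method are invented stand-ins, not the authors' actual pipeline.

# Hypothetical sketch of hypothesis generation via predictive modeling.
# The CSV, column names, and model settings are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

df = pd.read_csv("wvs_responses.csv")            # hypothetical survey file
y = df["unethical_justifiable"]                  # hypothetical target item
X = df.drop(columns=["unethical_justifiable"])   # the remaining items

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# A small feedforward network standing in for the paper's deep-learning model.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Rank items by how much shuffling each one degrades held-out accuracy;
# items near the top are candidate antecedents worth testing experimentally.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean),
                 key=lambda pair: pair[1], reverse=True)
for item, score in ranking[:10]:
    print(f"{item}: {score:.4f}")

An item about the future of humanity surfacing near the top of such a ranking is precisely the kind of machine-generated lead the authors then vetted with preregistered correlational and experimental studies.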

Here is how the research article begins:

Unethical behaviors can have substantial consequences in times of crisis. For example, in the midst of the COVID-19 pandemic, many people hoarded face masks and hand sanitizers; this hoarding deprived those who needed protective supplies most (e.g., medical workers and the elderly) and, therefore, put them at risk. Despite escalating deaths, more than 50,000 people were caught violating quarantine orders in Italy, putting themselves and others at risk. Governments covered up the scale of the pandemic in their countries, thereby allowing the infection to spread in an uncontrolled manner. Thus, understanding antecedents of unethical behavior and identifying nudges to reduce unethical behaviors are particularly important in times of crisis.

Here is part of the Discussion

We formulated a novel hypothesis—that optimism reduces unethicality—on the basis of the deep-learning model’s finding that whether people think that the future of humanity is bleak or bright is a strong predictor of unethicality. This variable was not flagged as a top predictor either by the correlational analysis or by the lasso regression. Consistent with this idea, the results of a correlational study showed that people higher on dispositional optimism were less willing to engage in unethical behaviors. A subsequent experiment found that increasing participants’ optimism about the COVID-19 pandemic reduced the extent to which they justified unethical behaviors related to the pandemic. The behavioral studies were conducted with U.S. participants; thus, the cultural generalizability of the present findings is unclear. Future research needs to test whether optimism reduces unethical behavior in other cultural contexts.

Sunday, April 5, 2020

Why your brain is not a computer

Matthew Cobb
theguardian.com
Originally posted 27 Feb 20

Here is an excerpt:

The processing of neural codes is generally seen as a series of linear steps – like a line of dominoes falling one after another. The brain, however, consists of highly complex neural networks that are interconnected, and which are linked to the outside world to effect action. Focusing on sets of sensory and processing neurons without linking these networks to the behaviour of the animal misses the point of all that processing.

By viewing the brain as a computer that passively responds to inputs and processes data, we forget that it is an active organ, part of a body that is intervening in the world, and which has an evolutionary past that has shaped its structure and function. This view of the brain has been outlined by the Hungarian neuroscientist György Buzsáki in his recent book The Brain from Inside Out. According to Buzsáki, the brain is not simply passively absorbing stimuli and representing them through a neural code, but rather is actively searching through alternative possibilities to test various options. His conclusion – following scientists going back to the 19th century – is that the brain does not represent information: it constructs it.

The metaphors of neuroscience – computers, coding, wiring diagrams and so on – are inevitably partial. That is the nature of metaphors, which have been intensely studied by philosophers of science and by scientists, as they seem to be so central to the way scientists think. But metaphors are also rich and allow insight and discovery. There will come a point when the understanding they allow will be outweighed by the limits they impose, but in the case of computational and representational metaphors of the brain, there is no agreement that such a moment has arrived. From a historical point of view, the very fact that this debate is taking place suggests that we may indeed be approaching the end of the computational metaphor. What is not clear, however, is what would replace it.

Scientists often get excited when they realise how their views have been shaped by the use of metaphor, and grasp that new analogies could alter how they understand their work, or even enable them to devise new experiments. Coming up with those new metaphors is challenging – most of those used in the past with regard to the brain have been related to new kinds of technology. This could imply that the appearance of new and insightful metaphors for the brain and how it functions hinges on future technological breakthroughs, on a par with hydraulic power, the telephone exchange or the computer. There is no sign of such a development; despite the latest buzzwords that zip about – blockchain, quantum supremacy (or quantum anything), nanotech and so on – it is unlikely that these fields will transform either technology or our view of what brains do.

The info is here.

Friday, April 27, 2018

The Mind-Expanding Ideas of Andy Clark

Larissa MacFarquhar
The New Yorker
Originally published April 2, 2018

Here is an excerpt:

Cognitive science addresses philosophical questions—What is a mind? What is the mind’s relationship to the body? How do we perceive and make sense of the outside world?—but through empirical research rather than through reasoning alone. Clark was drawn to it because he’s not the sort of philosopher who just stays in his office and contemplates; he likes to visit labs and think about experiments. He doesn’t conduct experiments himself; he sees his role as gathering ideas from different places and coming up with a larger theoretical framework in which they all fit together. In physics, there are both experimental and theoretical physicists, but there are fewer theoretical neuroscientists or psychologists—you have to do experiments, for the most part, or you can’t get a job. So in cognitive science this is a role that philosophers can play.

Most people, he realizes, tend to identify their selves with their conscious minds. That’s reasonable enough; after all, that is the self they know about. But there is so much more to cognition than that: the vast, silent cavern of underground mental machinery, with its tubes and synapses and electric impulses, so many unconscious systems and connections and tricks and deeply grooved pathways that form the pulsing substrate of the self. It is those primal mechanisms, the wiring and plumbing of cognition, that he has spent most of his career investigating. When you think about all that fundamental stuff—some ancient and shared with other mammals and distant ancestors, some idiosyncratic and new—consciousness can seem like a merely surface phenomenon, a user interface that obscures the real works below.

The article and audio file are here.

Thursday, October 12, 2017

New Theory Cracks Open the Black Box of Deep Learning

Natalie Wolchover
Quanta Magazine
Originally published September 21, 2017

Here is an excerpt:

In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied.
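For context, the information bottleneck has a compact formal statement in the 1999 paper by Tishby, Pereira, and Bialek: find a stochastic encoding of the input X into a compressed representation T that keeps only what is informative about the target Y. In LaTeX notation:

% Information bottleneck objective (Tishby, Pereira & Bialek, 1999):
% compress X (small I(X;T)) while preserving information about Y (large I(T;Y))
\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)

Here I denotes mutual information, and the multiplier beta sets the trade-off: a small beta favors aggressive compression of the input, while a large beta favors preserving information about the label. Tishby's argument, loosely, is that a deep network's layers come to approximate solutions of this objective over the course of training.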

Tishby’s findings have the AI community buzzing. “I believe that the information bottleneck idea could be very important in future deep neural network research,” said Alex Alemi of Google Research, who has already developed new approximation methods for applying an information bottleneck analysis to large deep neural networks. The bottleneck could serve “not only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks,” Alemi said.

Some researchers remain skeptical that the theory fully accounts for the success of deep learning, but Kyle Cranmer, a particle physicist at New York University who uses machine learning to analyze particle collisions at the Large Hadron Collider, said that as a general principle of learning, it “somehow smells right.”

The article is here.

Thursday, November 17, 2016

Can Machines Become Moral?

Don Howard
Big Questions Online
Originally published October 23, 2016

Here is an excerpt:

There is an important lesson here, which applies with equal force to the claim that robots cannot comprehend emotion. It is that what can or cannot be done in the domain of artificial intelligence is always an empirical question, the answer to which will have to await the results of further research and development. Confident a priori assertions about what science and engineering cannot achieve have a history of turning out to be wrong, as with Auguste Comte’s bold claim in the 1830s that science could never reveal the internal chemical constitution of the sun and other heavenly bodies, a claim he made at just the time when scientists like Fraunhofer, Foucault, Kirchhoff, and Bunsen were pioneering the use of spectrographic analysis for precisely that task.

The article is here.

Saturday, July 30, 2011

Researchers Create The First Artificial Neural Network Out Of DNA

Deborah Williams-Hedges
California Institute of Technology
via Medical News Today

Artificial intelligence has been the inspiration for countless books and movies, as well as the aspiration of countless scientists and engineers. Researchers at the California Institute of Technology (Caltech) have now taken a major step toward creating artificial intelligence - not in a robot or a silicon chip, but in a test tube. The researchers are the first to have made an artificial neural network out of DNA, creating a circuit of interacting molecules that can recall memories based on incomplete patterns, just as a brain can.

"The brain is incredible," says Lulu Qian, a Caltech senior postdoctoral scholar in bioengineering and lead author on the paper describing this work, published in the July 21 issue of the journal Nature. "It allows us to recognize patterns of events, form memories, make decisions, and take actions. So we asked, instead of having a physically connected network of neural cells, can a soup of interacting molecules exhibit brainlike behavior?"

The answer, as the researchers show, is yes.

Consisting of four artificial neurons made from 112 distinct DNA strands, the researchers' neural network plays a mind-reading game in which it tries to identify a mystery scientist. The researchers "trained" the neural network to "know" four scientists, whose identities are each represented by a specific, unique set of answers to four yes-or-no questions, such as whether the scientist was British. 

After thinking of a scientist, a human player provides an incomplete subset of answers that partially identifies the scientist. The player then conveys those clues to the network by dropping DNA strands that correspond to those answers into the test tube. Communicating via fluorescent signals, the network then identifies which scientist the player has in mind. Or, the network can "say" that it has insufficient information to pick just one of the scientists in its memory or that the clues contradict what it has remembered. The researchers played this game with the network using 27 different ways of answering the questions (out of 81 total combinations), and it responded correctly each time.
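As a rough software analogy (emphatically not the DNA implementation, which computes with interacting strands and reports through fluorescence), the game's logic amounts to matching a partial pattern of yes/no answers against four stored patterns and reporting a unique hit, insufficient information, or a contradiction. A minimal Python sketch follows; the scientists and answer patterns are invented placeholders.

# Toy analogy of the DNA network's mind-reading game.
# The names and 4-bit answer patterns are invented examples.
STORED = {
    "Scientist A": (1, 1, 0, 0),
    "Scientist B": (1, 0, 1, 0),
    "Scientist C": (0, 1, 0, 1),
    "Scientist D": (0, 0, 1, 1),
}

def identify(clues):
    """clues maps a question index (0-3) to a yes/no answer (1/0)."""
    matches = [name for name, pattern in STORED.items()
               if all(pattern[q] == a for q, a in clues.items())]
    if len(matches) == 1:
        return matches[0]          # unique identification
    if matches:
        return "insufficient information"
    return "clues contradict every stored memory"

print(identify({0: 1, 2: 1}))        # two clues suffice: "Scientist B"
print(identify({0: 1}))              # ambiguous: "insufficient information"
print(identify({0: 1, 1: 1, 2: 1}))  # matches no stored pattern: contradiction

In the DNA version, each stored pattern is encoded in the connections among the four artificial neurons, clues enter as added DNA strands, and the verdict is read out as fluorescent signals rather than printed strings.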

This DNA-based neural network demonstrates the ability to take an incomplete pattern and figure out what it might represent - one of the brain's unique features. "What we are good at is recognizing things," says coauthor Jehoshua "Shuki" Bruck, the Gordon and Betty Moore Professor of Computation and Neural Systems and Electrical Engineering. "We can recognize things based on looking only at a subset of features." The DNA neural network does just that, albeit in a rudimentary way.

Biochemical systems with artificial intelligence - or at least some basic, decision-making capabilities - could have powerful applications in medicine, chemistry, and biological research, the researchers say. In the future, such systems could operate within cells, helping to answer fundamental biological questions or diagnose a disease. Biochemical processes that can intelligently respond to the presence of other molecules could allow engineers to produce increasingly complex chemicals or build new kinds of structures, molecule by molecule. 

Read the entire story here.

The original article in Nature is here.