Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Thursday, August 15, 2019

Elon Musk unveils Neuralink’s plans for brain-reading ‘threads’ and a robot to insert them

Elizabeth Lopatto
www.theverge.com
Originally published July 16, 2019

Here is an excerpt:

“It’s not going to be suddenly Neuralink will have this neural lace and start taking over people’s brains,” Musk said. “Ultimately” he wants “to achieve a symbiosis with artificial intelligence.” And that even in a “benign scenario,” humans would be “left behind.” Hence, he wants to create technology that allows a “merging with AI.” He later added “we are a brain in a vat, and that vat is our skull,” and so the goal is to read neural spikes from that brain.

The first paralyzed person to receive a brain implant that allowed him to control a computer cursor was Matthew Nagle. In 2006, Nagle, who had a spinal cord injury, played Pong using only his mind; the basic movement required took him only four days to master, he told The New York Times. Since then, paralyzed people with brain implants have also brought objects into focus and moved robotic arms in labs, as part of scientific research. The system Nagle and others have used is called BrainGate and was developed initially at Brown University.

“Neuralink didn’t come out of nowhere, there’s a long history of academic research here,” Hodak said at the presentation on Tuesday. “We’re, in the greatest sense, building on the shoulders of giants.” However, none of the existing technologies fit Neuralink’s goal of directly reading neural spikes in a minimally invasive way.

The system presented today, if it’s functional, may be a substantial advance over older technology. BrainGate relied on the Utah Array, a series of stiff needles that allows for up to 128 electrode channels. Not only is that fewer channels than Neuralink is promising — meaning less data from the brain is being picked up — it’s also stiffer than Neuralink’s threads. That’s a problem for long-term functionality: the brain shifts in the skull but the needles of the array don’t, leading to damage. The thin polymers Neuralink is using may solve that problem.

The info is here.

Friday, June 21, 2019

It's not biology bro: Torture and the Misuse of Science

Shane O'Mara and John Schiemann
PsyArXiv Preprints
Last edited on December 24, 2018

Abstract

Contrary to the (in)famous line in the film Zero Dark Thirty, the CIA's torture program was not based on biology or any other science. Instead, the Bush administration and the CIA decided to use coercion immediately after the 9/11 terrorist attacks and then veneered the program's justification with a patina of pseudoscience, ignoring the actual biology of torturing human brains. We reconstruct the Bush administration’s decision-making process from released government documents, independent investigations, journalistic accounts, and memoirs to establish that the policy decision to use torture took place in the immediate aftermath of the 9/11 attacks without any investigation into its efficacy. We then present the pseudo-scientific model of torture sold to the CIA based on a loose amalgamation of methods from the old KUBARK manual, reverse-engineering of SERE training techniques, and learned helplessness theory, show why this ad hoc model amounted to pseudoscience, and then catalog what the actual science of torturing human brains – available in 2001 – reveals about the practice. We conclude with a discussion of how the process of policy-making might incorporate countervailing evidence to ensure that policy problems are forestalled, via the concept of an evidence-based policy brake, which is deliberately instituted to prevent a policy going forward that is contrary to law, ethics and evidence.

The info is here.

Thursday, May 2, 2019

Part-revived pig brains raise slew of ethical quandaries

Nita A. Farahany, Henry T. Greely & Charles M. Giattino
Nature
Originally published April 17, 2019

Scientists have restored and preserved some cellular activities and structures in the brains of pigs that had been decapitated for food production four hours before. The researchers saw circulation in major arteries and small blood vessels, metabolism and responsiveness to drugs at the cellular level and even spontaneous synaptic activity in neurons, among other things. The team formulated a unique solution and circulated it through the isolated brains using a network of pumps and filters called BrainEx. The solution was cell-free, did not coagulate and contained a haemoglobin-based oxygen carrier and a wide range of pharmacological agents.

The remarkable study, published in this week’s Nature, offers the promise of an animal or even human whole-brain model in which many cellular functions are intact. At present, cells from animal and human brains can be sustained in culture for weeks, but only so much can be gleaned from isolated cells. Tissue slices can provide snapshots of local structural organization, yet they are woefully inadequate for questions about function and global connectivity, because much of the 3D structure is lost during tissue preparation.

The work also raises a host of ethical issues. There was no evidence of any global electrical activity — the kind of higher-order brain functioning associated with consciousness. Nor was there any sign of the capacity to perceive the environment and experience sensations. Even so, because of the possibilities it opens up, the BrainEx study highlights potential limitations in the current regulations for animals used in research.

The info is here.

Sunday, April 14, 2019

Scientists Grew a Mini-Brain in a Dish, And It Connected to a Spinal Cord by Itself

Carly Cassella
www.sciencealert.com
Originally posted March 20, 2019

Lab-growing the most complex structure in the known Universe may sound like an impossible task, but that hasn't stopped scientists from trying.

After years of work, researchers in the UK have now cultivated one of the most sophisticated miniature brains-in-a-dish yet, and it actually managed to behave in a slightly freaky fashion.

The grey blob was composed of about two million organised neurons, which is similar to the human foetal brain at 12 to 13 weeks. At this stage, this so-called 'brain organoid' is not complex enough to have any thoughts, feelings, or consciousness - but that doesn't make it entirely inert.

When placed next to a piece of mouse spinal cord and a piece of mouse muscle tissue, this disembodied, pea-sized blob of human brain cells sent out long, probing tendrils to check out its new neighbours.

Using long-term live microscopy, researchers were able to watch as the mini-brain spontaneously connected itself to the nearby spinal cord and muscle tissue.

The info is here.

Monday, January 7, 2019

The Boundary Between Our Bodies and Our Tech

Kevin Lincoln
Pacific Standard
Originally published November 8, 2018

Here is an excerpt:

"They argued that, essentially, the mind and the self are extended to those devices that help us perform what we ordinarily think of as our cognitive tasks," Lynch says. This can include items as seemingly banal and analog as a piece of paper and a pen, which help us remember, a duty otherwise performed by the brain. According to this philosophy, the shopping list, for example, becomes part of our memory, the mind spilling out beyond the confines of our skull to encompass anything that helps it think.

"Now if that thought is right, it's pretty clear that our minds have become even more radically extended than ever before," Lynch says. "The idea that our self is expanding through our phones is plausible, and that's because our phones, and our digital devices generally—our smartwatches, our iPads—all these things have become a really intimate part of how we go about our daily lives. Intimate in the sense in which they're not only on our body, but we sleep with them, we wake up with them, and the air we breathe is filled, in both a literal and figurative sense, with the trails of ones and zeros that these devices leave behind."

This gets at one of the essential differences between a smartphone and a piece of paper, which is that our relationship with our phones is reciprocal: We not only put information into the device, we also receive information from it, and, in that sense, it shapes our lives far more actively than would, say, a shopping list. The shopping list isn't suggesting to us, based on algorithmic responses to our past and current shopping behavior, what we should buy; the phone is.

The info is here.

Monday, December 31, 2018

How free is our will?

Kevin Mitchell
Wiring The Brain Blog
Originally posted November 25, 2018

Here is an excerpt:

Being free – to my mind at least – doesn’t mean making decisions for no reasons, it means making them for your reasons. Indeed, I would argue that this is exactly what is required to allow any kind of continuity of the self. If you were just doing things on a whim all the time, what would it mean to be you? We accrue our habits and beliefs and intentions and goals over our lifetime, and they collectively affect how actions are suggested and evaluated.

Whether we are conscious of that is another question. Most of our reasons for doing things are tacit and implicit – they’ve been wired into our nervous systems without our even being aware of them. But they’re still part of us – you could argue they’re precisely what makes us us. Even if most of that decision-making happens subconsciously, it’s still you doing it.

Ultimately, whether you think you have free will or not may depend less on the definition of “free will” and more on the definition of “you”. If you identify just as the president – the decider-in-chief – then maybe you’ll be dismayed at how little control you seem to have or how rarely you really exercise it. (Not never, but maybe less often than your ego might like to think).

But that brings us back to a very dualist position, identifying you with only your conscious mind, as if it can somehow be separated from all the underlying workings of your brain. Perhaps it’s more appropriate to think that you really comprise all of the machinery of government, even the bits that the president never sees or is not even aware exist.

The info is here.

Sunday, December 23, 2018

Fresh urgency in mapping out ethics of brain organoid research

Julian Koplin and Julian Savulescu
The Conversation
Originally published November 20, 2018

Here is an excerpt:

But brain organoid research also raises serious ethical questions. The main concern is that brain organoids could one day attain consciousness – an issue that has just been brought to the fore by a new scientific breakthrough.

Researchers from the University of California, San Diego, recently reported the creation of brain organoids that spontaneously produce brain waves resembling those found in premature infants. Although this electrical activity does not necessarily mean these organoids are conscious, it does show that we need to think through the ethics sooner rather than later.

Regulatory gaps

Stem cell research is already subject to careful regulation. However, existing regulatory frameworks have not yet caught up with the unique set of ethical concerns associated with brain organoids.

Guidelines like the National Health and Medical Research Council’s National Statement on Ethical Conduct in Human Research protect the interests of those who donate human biological material to research (and also address a host of other issues). But they do not consider whether brain organoids themselves could acquire morally relevant interests.

This gap has not gone unnoticed. A growing number of commentators argue that brain organoid research should face restrictions beyond those that apply to stem cell research more generally. Unfortunately, little progress has been made on identifying what form these restrictions should take.

The info is here.

Wednesday, October 31, 2018

We’re Worrying About the Wrong Kind of AI

Mark Buchanan
Bloomberg.com
Originally posted June 11, 2018

No computer has yet shown features of true human-level artificial intelligence, much less conscious awareness. Some experts think we won't see it for a long time to come. And yet academics, ethicists, developers and policy-makers are already thinking a lot about the day when computers become conscious, not to mention worrying about more primitive AI being used in defense projects.

Now consider that biologists have been learning to grow functioning “mini brains” or “brain organoids” from real human cells, and progress has been so fast that researchers are actually worrying about what to do if a piece of tissue in a lab dish suddenly shows signs of having conscious states or reasoning abilities. While we are busy focusing on computer intelligence, AI may arrive in living form first, and bring with it a host of unprecedented ethical challenges.

In the 1930s, the British mathematician Alan Turing famously set out the mathematical foundations for digital computing. It's less well known that Turing later pioneered the mathematical theory of morphogenesis, or how organisms develop from single cells into complex multicellular beings through a sequence of controlled transformations making increasingly intricate structures. Morphogenesis is also a computation, only with a genetic program controlling not just 0s and 1s, but complex chemistry, physics and cellular geometry.
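
Turing's idea of development as a computation can be made concrete with a reaction-diffusion simulation. The sketch below is illustrative only: it uses the Gray-Scott model, a standard modern example of a Turing-style system rather than Turing's original 1952 equations, with commonly used demo parameter values. A near-uniform field of two simulated chemicals "computes" its way into a stable spatial pattern.

```python
import numpy as np

# Illustrative sketch only: a 1D Gray-Scott reaction-diffusion system,
# a modern stand-in for the Turing-style "computation" described above.
# Parameter values are common demo settings, not taken from the article.

n, steps = 200, 10000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060

u = np.ones(n)              # chemical U starts uniform...
v = np.zeros(n)             # ...and chemical V is absent,
u[n//2-5:n//2+5] = 0.50     # except for a small perturbation
v[n//2-5:n//2+5] = 0.25     # seeded in the middle of the domain.

def laplacian(a):
    # Discrete diffusion with periodic boundaries.
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(steps):
    uvv = u * v * v                              # the U + 2V -> 3V reaction
    u += Du * laplacian(u) - uvv + F * (1 - u)   # U is fed in at rate F
    v += Dv * laplacian(v) + uvv - (F + k) * v   # V is removed at rate F + k

# The initially near-uniform state self-organises into spatial structure:
print(np.round(v[::10], 2))
```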

Following Turing's thinking, biologists have learned to control the computation of biological development so accurately that lab growth of artificial organs, even brains, is no longer science fiction.

The information is here.

Thursday, September 20, 2018

Man-made human 'minibrains' spark debate on ethics and morality

Carolyn Y. Johnson
www.iol.za
Originally posted September 3, 2018

Here is an excerpt:

Five years ago, an ethical debate about organoids seemed to many scientists to be premature. The organoids were exciting because they were similar to the developing brain, and yet they were incredibly rudimentary. They were constrained in how big they could get before cells in the core started dying, because they weren't suffused with blood vessels or supplied with nutrients and oxygen by a beating heart. They lacked key cell types.

Still, there was something different about brain organoids compared with routine biomedical research. Song recalled that one of the amazing but also unsettling things about the early organoids was that they weren't as targeted to develop into specific regions of the brain, so it was possible to accidentally get retinal cells.

"It's difficult to see the eye in a dish," Song said.

Now, researchers are succeeding at keeping organoids alive for longer periods of time. At a talk, Hyun recalled one researcher joking that the lab had sung "Happy Birthday" to an organoid when it was a year old. Some researchers are implanting organoids into rodent brains, where they can stay alive longer and grow more mature. Others are building multiple organoids representing different parts of the brain, such as the hippocampus, which is involved in memory, or the cerebral cortex - the seat of cognition - and fusing them together into larger "assembloids."

Even as scientists express scepticism that brain organoids will ever come close to sentience, they're the ones calling for a broad discussion, and perhaps more oversight. The questions range from the practical to the fantastical. Should researchers make sure that people who donate their cells for organoid research are informed that they could be used to make a tiny replica of parts of their brain? If organoids became sophisticated enough, should they be granted greater protections, like the rules that govern animal research? Without a consensus on what consciousness or pain would even look like in the brain, how will scientists know when they're nearing the limit?

The info is here.

Monday, September 10, 2018

The Cognitive Biases Tricking Your Brain

Ben Yagoda
The Atlantic
September 2018 Issue

Here is an excerpt:

Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves. Instead, it has been devoted to changing behavior, in the form of incentives or “nudges.” For example, while present bias has so far proved intractable, employers have been able to nudge employees into contributing to retirement plans by making saving the default option; you have to actively take steps in order to not participate. That is, laziness or inertia can be more powerful than bias. Procedures can also be organized in a way that dissuades or prevents people from acting on biased thoughts. A well-known example: the checklists for doctors and nurses put forward by Atul Gawande in his book The Checklist Manifesto.

Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative. These experiments are based on the reactions and responses of randomly chosen subjects, many of them college undergraduates: people, that is, who care about the $20 they are being paid to participate, not about modifying or even learning about their behavior and thinking. But what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?

The info is here.

Friday, May 25, 2018

What does it take to be a brain disorder?

Anneli Jefferson
Synthese (2018).
https://doi.org/10.1007/s11229-018-1784-x

Abstract

In this paper, I address the question whether mental disorders should be understood to be brain disorders and what conditions need to be met for a disorder to be rightly described as a brain disorder. I defend the view that mental disorders are autonomous and that a condition can be a mental disorder without at the same time being a brain disorder. I then show the consequences of this view. The most important of these is that brain differences underlying mental disorders derive their status as disordered from the fact that they realize mental dysfunction and are therefore non-autonomous or dependent on the level of the mental. I defend this view of brain disorders against the objection that only conditions whose pathological character can be identified independently of the mental level of description count as brain disorders. The understanding of brain disorders I propose requires a certain amount of conceptual revision and is at odds with approaches which take the notion of brain disorder to be fundamental or look to neuroscience to provide us with a purely physiological understanding of mental illness. It also entails a pluralistic understanding of psychiatric illness, according to which a condition can be both a mental disorder and a brain disorder.

The research is here.

Monday, April 16, 2018

Psychotherapy Is 'The' Biological Treatment

Robert Berezin
Medscape.com
Originally posted March 16, 2018

Neuroscience surprisingly teaches us that not only is psychotherapy purely biological, but it is the only real biological treatment. It addresses the brain in the way it actually develops, matures, and operates. It follows the principles of evolutionary adaptation. It is consonant with genetics. And it specifically heals the problematic adaptations of the brain in precisely the ways that they evolved in the first place. Psychotherapy deactivates maladaptive brain mappings and fosters new and constructive pathways. Let me explain.

The operations of the brain are purely biological. The brain maps our experiences and memories through the linking of trillions of neuronal connections. These interconnected webs create larger circuits that map all throughout the architecture of the cortex. This generates high-level symbolic neuronal maps that take form as images in our consciousness. The play of consciousness is the highest level of symbolic form. It is a living theater of "image-ination," a representational world that consists of a cast of characters who relate together by feeling as well as scenarios, plots, set designs, and landscape.

As we adapt to our environment, the brain maps our emotional experience through cortical memory. This starts very early in life. If a baby is startled by a loud noise, his arms and legs will flail. His heart pumps adrenaline, and he cries. This "startle" maps a fight-or-flight response in his cortex, which is mapped through serotonin and cortisol. The baby is restored by his mother's holding. Her responsive repair once again re-establishes and maintains his well-being, which is mapped through oxytocin. These ongoing formative experiences of life are mapped into memory in precisely these two basic ways.

The article is here.

Saturday, April 14, 2018

The AI Cargo Cult: The Myth of a Superhuman AI

Kevin Kelly
www.wired.com
Originally published April 25, 2017

Here is an excerpt:

The most common misconception about artificial intelligence begins with the common misconception about natural intelligence. This misconception is that intelligence is a single dimension. Most technical people tend to graph intelligence the way Nick Bostrom does in his book, Superintelligence — as a literal, single-dimension, linear graph of increasing amplitude. At one end is the low intelligence of, say, a small animal; at the other end is the high intelligence of, say, a genius—almost as if intelligence were a sound level in decibels. Of course, it is then very easy to imagine the extension so that the loudness of intelligence continues to grow, eventually to exceed our own high intelligence and become a super-loud intelligence — a roar! — way beyond us, and maybe even off the chart.

This model is topologically equivalent to a ladder, so that each rung of intelligence is a step higher than the one before. Inferior animals are situated on lower rungs below us, while higher-level intelligence AIs will inevitably overstep us onto higher rungs. Time scales of when it happens are not important; what is important is the ranking—the metric of increasing intelligence.

The problem with this model is that it is mythical, like the ladder of evolution. The pre-Darwinian view of the natural world supposed a ladder of being, with inferior animals residing on rungs below human. Even post-Darwin, a very common notion is the “ladder” of evolution, with fish evolving into reptiles, then up a step into mammals, up into primates, into humans, each one a little more evolved (and of course smarter) than the one before it. So the ladder of intelligence parallels the ladder of existence. But both of these models supply a thoroughly unscientific view.

The information is here.

Friday, March 9, 2018

The brain as artificial intelligence: prospecting the frontiers of neuroscience

Fuller, S.
AI & Soc (2018).
https://doi.org/10.1007/s00146-018-0820-1

Abstract

This article explores the proposition that the brain, normally seen as an organ of the human body, should be understood as a biologically based form of artificial intelligence, in the course of which the case is made for a new kind of ‘brain exceptionalism’. After noting that such a view was generally assumed by the founders of AI in the 1950s, the argument proceeds by drawing on the distinction between science—in this case neuroscience—adopting a ‘telescopic’ or a ‘microscopic’ orientation to reality, depending on how it regards its characteristic investigative technologies. The paper concludes by recommending a ‘microscopic’ yet non-reductionist research agenda for neuroscience, in which the brain is seen as an underutilised organ whose energy efficiency is likely to outstrip that of the most powerful supercomputers for the foreseeable future.

The article is here.

Thursday, January 25, 2018

Neurotechnology, Elon Musk and the goal of human enhancement

Sarah Marsh
The Guardian
Originally published January 1, 2018

Here is an excerpt:

“I hope more resources will be put into supporting this very promising area of research. Brain Computer Interfaces (BCIs) are not only an invaluable tool for people with disabilities, but they could be a fundamental tool for going beyond human limits, hence improving everyone’s life.”

He notes, however, that one of the biggest challenges with this technology is that first we need to better understand how the human brain works before deciding where and how to apply BCI. “This is why many agencies have been investing in basic neuroscience research – for example, the BRAIN Initiative in the US and the Human Brain Project in the EU.”

Whenever there is talk of enhancing humans, moral questions remain – particularly around where the human ends and the machine begins. “In my opinion, one way to overcome these ethical concerns is to let humans decide whether they want to use a BCI to augment their capabilities,” Valeriani says.

“Neuroethicists are working to give advice to policymakers about what should be regulated. I am quite confident that, in the future, we will be more open to the possibility of using BCIs if such systems provide a clear and tangible advantage to our lives.”

The article is here.

Thursday, January 18, 2018

Humans 2.0: meet the entrepreneur who wants to put a chip in your brain

Zofia Niemtus
The Guardian
Originally posted December 14, 2017

Here are two excerpts:

The shape that this technology will take is still unknown. Johnson uses the term “brain chip”, but the developments taking place in neuroprosthetics are working towards less invasive procedures than opening up your skull and cramming a bit of hardware in; injectable sensors are one possibility.

It may sound far-fetched, but Johnson has a track record of getting things done. Within his first semester at university, he’d set up a profitable business selling mobile phones to fellow students. By age 30, he’d founded online payment company Braintree, which he sold six years later to PayPal for $800m. He used $100m of the proceeds to create Kernel in 2016 – it now employs more than 30 people.

(cut)

“And yet, the brain is everything we are, everything we do, and everything we aspire to be. It seemed obvious to me that the brain is both the most consequential variable in the world and also our biggest blind spot as a species. I decided that if the root problems of humanity begin in the human mind, let’s change our minds.”

The article is here.

Thursday, January 11, 2018

Is Blended Intelligence the Next Stage of Human Evolution?

Richard Yonck
TED Talk
Published December 8, 2017

What is the future of intelligence? Humanity is still an extremely young species and yet our prodigious intellects have allowed us to achieve all manner of amazing accomplishments in our relatively short time on this planet, most especially during the past couple of centuries or so. Yet, it would be short-sighted of us to assume our species has reached the end of our journey, having become as intelligent as we will ever be. On the contrary, it seems far more likely that if we should survive our “infancy,” there is probably much more time ahead of us than there is looking back. If that’s the case, then our descendants of only a few thousand years from now will probably be very, very different from you and me.


Tuesday, January 2, 2018

The Neuroscience of Changing Your Mind

Bret Stetka
Scientific American
Originally published on December 7, 2017

Here are two excerpts:

Scientists have long accepted that our ability to abruptly stop or modify a planned behavior is controlled via a single region within the brain’s prefrontal cortex, an area involved in planning and other higher mental functions. By studying other parts of the brain in both humans and monkeys, however, a team from Johns Hopkins University has now concluded that last-minute decision-making is a lot more complicated than previously known, involving complex neural coordination among multiple brain areas. The revelations may help scientists unravel certain aspects of addictive behaviors and understand why accidents like falls grow increasingly common as we age, according to the Johns Hopkins team.

(cut)

Tracking these eye movements and neural activity let the researchers resolve the very confusing question of what brain areas are involved in these split-second decisions, says Vanderbilt University neuroscientist Jeffrey Schall, who was not involved in the research. “By combining human functional brain imaging with nonhuman primate neurophysiology, [the investigators] weave together threads of research that have too long been separate strands,” he says. “If we can understand how the brain stops or prevents an action, we may gain ability to enhance that stopping process to afford individuals more control over their choices.”

The article is here.

Friday, August 11, 2017

The real problem (of consciousness)

Anil K Seth
Aeon.com
Originally posted November 2, 2016

Here is an excerpt:

The classical view of perception is that the brain processes sensory information in a bottom-up or ‘outside-in’ direction: sensory signals enter through receptors (for example, the retina) and then progress deeper into the brain, with each stage recruiting increasingly sophisticated and abstract processing. In this view, the perceptual ‘heavy-lifting’ is done by these bottom-up connections. The Helmholtzian view inverts this framework, proposing that signals flowing into the brain from the outside world convey only prediction errors – the differences between what the brain expects and what it receives. Perceptual content is carried by perceptual predictions flowing in the opposite (top-down) direction, from deep inside the brain out towards the sensory surfaces. Perception involves the minimisation of prediction error simultaneously across many levels of processing within the brain’s sensory systems, by continuously updating the brain’s predictions. In this view, which is often called ‘predictive coding’ or ‘predictive processing’, perception is a controlled hallucination, in which the brain’s hypotheses are continually reined in by sensory signals arriving from the world and the body. ‘A fantasy that coincides with reality,’ as the psychologist Chris Frith eloquently put it in Making Up the Mind (2007).
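
The core loop of this scheme is simple enough to sketch in code. The toy model below is a minimal, one-level illustration of predictive processing, not Seth's own model: the linear generative model, noise level, and learning rate are all assumptions made for the example. A top-down hypothesis is repeatedly nudged to shrink the prediction error carried by the bottom-up signal.

```python
import numpy as np

# A minimal one-level predictive-coding sketch (illustrative assumptions:
# a linear generative model and a fixed learning rate; the scheme described
# in the article is hierarchical and far richer than this).

rng = np.random.default_rng(0)

def generate(mu):
    """Top-down generative model: predict the sensory signal from a hypothesised cause."""
    return 2.0 * mu  # assumed linear mapping, slope dg/dmu = 2.0

true_cause = 1.5
sensory_input = generate(true_cause) + rng.normal(0.0, 0.1)  # noisy bottom-up signal

mu = 0.0   # the brain's initial hypothesis about the hidden cause
lr = 0.1
for _ in range(100):
    prediction = generate(mu)            # prediction flows "outward"
    error = sensory_input - prediction   # only the error flows "inward"
    mu += lr * 2.0 * error               # update the hypothesis to shrink the
                                         # error (gradient step on squared error)

print(f"inferred cause: {mu:.3f}  (true cause: {true_cause})")
```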

Armed with this theory of perception, we can return to consciousness. Now, instead of asking which brain regions correlate with conscious (versus unconscious) perception, we can ask: which aspects of predictive perception go along with consciousness? A number of experiments are now indicating that consciousness depends more on perceptual predictions than on prediction errors. In 2001, Alvaro Pascual-Leone and Vincent Walsh at Harvard Medical School asked people to report the perceived direction of movement of clouds of drifting dots (so-called ‘random dot kinematograms’). They used TMS to specifically interrupt top-down signalling across the visual cortex, and they found that this abolished conscious perception of the motion, even though bottom-up signals were left intact.

The article is here.

Tuesday, April 25, 2017

Artificial synapse on a chip will help mobile devices learn like the human brain

Luke Dormehl
Digital Trends
Originally posted April 6, 2017

Brain-inspired deep learning neural networks have been behind many of the biggest breakthroughs in artificial intelligence seen over the past 10 years.

But a new research project from the National Center for Scientific Research (CNRS), the University of Bordeaux, and Norwegian information technology company Evry could take these breakthroughs to the next level — thanks to the creation of an artificial synapse on a chip.

“There are many breakthroughs from software companies that use algorithms based on artificial neural networks for pattern recognition,” Dr. Vincent Garcia, a CNRS research scientist who worked on the project, told Digital Trends. “However, as these algorithms are simulated on standard processors they require a lot of power. Developing artificial neural networks directly on a chip would make this kind of tasks available to everyone, and much more power efficient.”

Synapses in the brain function as the connections between neurons. Learning takes place when these connections are reinforced and improved as synapses are stimulated. The newly developed electronic devices (called “memristors”) emulate the behavior of these synapses by way of a variable resistance that depends on the history of electronic excitations they receive.
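
To make that history-dependence concrete, here is a toy software model of a memristive synapse. It is a sketch under assumed parameter values, not a model of the CNRS device itself: each voltage pulse nudges the device's conductance up or down between fixed bounds, so its current "weight" encodes the pulses it has received.

```python
# Toy model of a memristive synapse (illustrative only; parameter values
# are assumptions, not properties of the device described in the article).

class MemristorSynapse:
    def __init__(self, g_min=0.1, g_max=1.0, g0=0.5, rate=0.05):
        self.g_min, self.g_max, self.rate = g_min, g_max, rate
        self.g = g0  # conductance (1/resistance) plays the role of synaptic weight

    def pulse(self, voltage):
        """Apply one voltage pulse; the conductance change depends on polarity."""
        if voltage > 0:
            # Potentiation: conductance creeps toward its upper bound.
            self.g += self.rate * (self.g_max - self.g)
        else:
            # Depression: conductance decays toward its lower bound.
            self.g -= self.rate * (self.g - self.g_min)
        return self.g

syn = MemristorSynapse()
trace = [syn.pulse(+1.0) for _ in range(10)] + [syn.pulse(-1.0) for _ in range(5)]
print(["{:.3f}".format(g) for g in trace])  # reinforcement, then decay
```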

The article is here.