Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Human.

Thursday, March 16, 2023

Drowning in Debris: A Daughter Faces Her Mother’s Hoarding

Deborah Derrickson Kossmann
Psychotherapy Networker
March/April 2023

Here is an excerpt:

My job as a psychologist is to salvage things, to use the stories people tell me in therapy and help them understand themselves and others better. I make meaning out of the joy and wreckage of my own life, too. Sure, I could’ve just hired somebody to shovel all my mother’s mess into a dumpster, but I needed to be my family’s archaeologist, excavating and preserving what was beautiful and meaningful. My mother isn’t wrong to say that holding on to some things is important. Like her, I appreciate connections to the past. During the cleaning, I found photographs, jewelry passed down over generations, and my bronzed baby shoes. I treasure these things.

“Maybe I failed by not following anything the psychology books say to do with a hoarding client,” I tell my sister over the phone. “Sometimes I still feel like I wasn’t compassionate enough.”

“You handled it as best you could as her daughter,” my sister says. “You’re not her therapist.”

After six years, my mother has finally stopped saying she’s a “prisoner” at assisted living. She tells me she’s part of a “posse” of women who eat dinner together. My sister decorated her studio apartment beautifully, but the cluttering has begun again. Piles of magazines and newspapers sit in corners of her room. Sometimes, I feel the rage and despair these behaviors trigger in me. I still have nightmares where I drive to my mother’s house, open the door, and see only darkness, black and terrifying, like I’m looking into a deep cave. Then, I’m fleeing while trying to wipe feces off my arm. I wake up feeling sadness and shame, but I know it isn’t my own.

A few weeks ago, I pulled up in front of my mother’s building after taking her to the cardiologist. We turned toward each other and hugged goodbye. She opened the car door with some effort and determinedly waved off my help before grabbing the bag of books I’d brought for her.

“I can do it, Deborah,” she snapped. But after taking a few steps toward the building entrance, she turned around to look at me and smiled. “Thank you,” she said. “I really appreciate all you do for me.” She added, softly, “I know it’s a lot.”


The article is an important reminder that practicing psychologists cope with their own stressors, family dynamics, and unpleasant emotional experiences.  Psychologists are humans with families, value systems, emotions, beliefs, and shortcomings.

Sunday, November 6, 2022

‘Breakthrough’ finding shows how modern humans grow more brain cells than Neanderthals

Rodrigo Pérez Ortega
Science.org
Originally posted September 8, 2022

We humans are proud of our big brains, which are responsible for our ability to plan ahead, communicate, and create. Inside our skulls, we pack, on average, 86 billion neurons—up to three times more than those of our primate cousins. For years, researchers have tried to figure out how we manage to develop so many brain cells. Now, they’ve come a step closer: A new study shows a single amino acid change in a metabolic gene helps our brains develop more neurons than other mammals—and more than our extinct cousins, the Neanderthals.

The finding “is really a breakthrough,” says Brigitte Malgrange, a developmental neurobiologist at the University of Liège who was not involved in the study. “A single amino acid change is really, really important and gives rise to incredible consequences regarding the brain.”

What makes our brain human has long interested neurobiologist Wieland Huttner at the Max Planck Institute of Molecular Cell Biology and Genetics. In 2016, his team found that a mutation in the ARHGAP11B gene, found in humans, Neanderthals, and Denisovans but not other primates, increased the production of cells that develop into neurons. Although our brains are roughly the same size as those of Neanderthals, our brain shapes differ, and we created complex technologies they never developed. So Huttner and his team set out to find genetic differences between Neanderthals and modern humans, especially in cells that give rise to neurons of the neocortex. This region behind the forehead is the largest and most recently evolved part of our brain, where major cognitive processes happen.

The team focused on TKTL1, a gene that in modern humans has a single amino acid change—from lysine to arginine—from the version in Neanderthals and other mammals. By analyzing previously published data, researchers found that TKTL1 was mainly expressed in progenitor cells called basal radial glia, which give rise to most of the cortical neurons during development.

Friday, September 3, 2021

What is consciousness, and could machines have it?

S. Dehaene, H. Lau, & S. Kouider
Science, 27 Oct 2017
Vol. 358, Issue 6362, pp. 486-492

Abstract

The controversial question of whether machines may ever be conscious must be based on a careful consideration of how consciousness arises in the only physical system that undoubtedly possesses it: the human brain. We suggest that the word “consciousness” conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report (C1, consciousness in the first sense), and the self-monitoring of those computations, leading to a subjective sense of certainty or error (C2, consciousness in the second sense). We argue that despite their recent successes, current machines are still mostly implementing computations that reflect unconscious processing (C0) in the human brain. We review the psychological and neural science of unconscious (C0) and conscious computations (C1 and C2) and outline how they may inspire novel machine architectures.

From Concluding remarks

Our stance is based on a simple hypothesis: What we call “consciousness” results from specific types of information-processing computations, physically realized by the hardware of the brain. It differs from other theories in being resolutely computational; we surmise that mere information-theoretic quantities do not suffice to define consciousness unless one also considers the nature and depth of the information being processed.

We contend that a machine endowed with C1 and C2 would behave as though it were conscious; for instance, it would know that it is seeing something, would express confidence in it, would report it to others, could suffer hallucinations when its monitoring mechanisms break down, and may even experience the same perceptual illusions as humans. Still, such a purely functional definition of consciousness may leave some readers unsatisfied. Are we “over-intellectualizing” consciousness, by assuming that some high-level cognitive functions are necessarily tied to consciousness? Are we leaving aside the experiential component (“what it is like” to be conscious)? Does subjective experience escape a computational definition?

Although those philosophical questions lie beyond the scope of the present paper, we close by noting that empirically, in humans the loss of C1 and C2 computations covaries with a loss of subjective experience. 

Wednesday, March 3, 2021

Evolutionary biology meets consciousness: essay review

Browning, H., Veit, W. 
Biol Philos 36, 5 (2021). 
https://doi.org/10.1007/s10539-021-09781-7

Abstract

In this essay, we discuss Simona Ginsburg and Eva Jablonka’s The Evolution of the Sensitive Soul from an interdisciplinary perspective. Constituting perhaps the longest treatise on the evolution of consciousness, Ginsburg and Jablonka unite their expertise in neuroscience and biology to develop a beautifully Darwinian account of the dawning of subjective experience. Though it would be impossible to cover all its content in a short book review, here we provide a critical evaluation of their two key ideas—the role of Unlimited Associative Learning in the evolution of, and detection of, consciousness and a metaphysical claim about consciousness as a mode of being—in a manner that will hopefully overcome some of the initial resistance of potential readers to tackle a book of this length.

Here is one portion:

Modes of being

The second novel idea within their book is to conceive of consciousness as a new mode of being, rather than a mere trait. This part of their argument may appear unusual to many operating in the debate, not least because this formulation—not unlike their choice to include Aristotle’s sensitive soul in the title—evokes a sense of outdated and strange metaphysics. We share some of this resistance to the vocabulary, but think it best conceived as a metaphor.

They begin their book by introducing the idea of teleological (goal-directed) systems and the three ‘modes of being’, taken from the works of Aristotle, each of which is considered to have a unique telos (goal). These are: life (survival/reproduction), sentience (value ascription to stimuli), and rationality (value ascription to concepts). The focus of this book is the second of these—the “sensitive soul”. Rather than a trait, such as vision, G&J see consciousness as a mode of being, in the same way as the emergence of life and rational thought also constitute new modes of being.

In several places throughout their book, G&J motivate their account through this analogy, i.e. by drawing a parallel from consciousness to life and/or rationality. Neither, they think, can be captured in a simple definition or trait, thus explaining the lack of progress on trying to come up with definitions for these phenomena. Compare their discussion of the distinction between life and non-life. Life, they argue, is not a functional trait that organisms possess, but rather a new way of being that opens up new possibilities; so too with consciousness. It is a new form of biological organization at a level above the organism that gives rise to a “new type of goal-directed system”, one which faces a unique set of challenges and opportunities. They identify three such transitions—the transition from non-life to life (the “nutritive soul”), the transition from non-conscious to conscious (the “sensitive soul”) and the transition from non-rational to rational (the “rational soul”). All three transitions mark a change to a new form of being, one in which the types of goals change. But while this is certainly correct in the sense of constituting a radical transformation in the kinds of goal-directed systems there are, we have qualms with the idea that this formal equivalence or abstract similarity can be used to ground more concrete properties. Yet G&J use this analogy to motivate their UAL account in parallel to unlimited heredity as a transition marker of life.

Wednesday, November 25, 2020

The subjective turn

Jon Stewart
aeon.co
Originally posted November 2, 2020

What is the human being? Traditionally, it was thought that human nature was something fixed, given either by nature or by God, once and for all. Humans occupy a unique place in creation by virtue of a specific combination of faculties that they alone possess, and this is what makes us who we are. This view comes from the schools of ancient philosophy such as Platonism, Aristotelianism and Stoicism, as well as the Christian tradition. More recently, it has been argued that there is actually no such thing as human nature but merely a complex set of behaviours and attitudes that can be interpreted in different ways. For this view, all talk of a fixed human nature is merely a naive and convenient way of discussing the human experience, but doesn’t ultimately correspond to any external reality. This view can be found in the traditions of existentialism, deconstruction and different schools of modern philosophy of mind.

There is, however, a third approach that occupies a place between these two. This view, which might be called historicism, claims that there is a meaningful conception of human nature, but that it changes over time as human society develops. This approach is most commonly associated with the German philosopher G W F Hegel (1770-1831). He rejects the claim of the first view, that of the essentialists, since he doesn’t think that human nature is something given or created once and for all. But he also rejects the second view since he doesn’t believe that the notion of human nature is just an outdated fiction we’ve inherited from the tradition. Instead, Hegel claims that it’s meaningful and useful to talk about the reality of some kind of human nature, and that this can be understood by an analysis of human development in history. Unfortunately, Hegel wrote in a rather inaccessible fashion, which has led many people to dismiss his views as incomprehensible or confused. His theory of philosophical anthropology, which is closely connected to his theory of historical development, has thus remained the domain of specialists. It shouldn’t.

With his astonishing wealth of knowledge about history and culture, Hegel analyses the ways in which what we today call subjectivity and individuality first arose and developed through time. He holds that, at the beginning of human history, people didn’t conceive of themselves as individuals in the same way that we do today. There was no conception of a unique and special inward sphere that we value so much in our modern self-image. Instead, the ancients conceived of themselves primarily as belonging to a larger group: the family, the tribe, the state, etc. This meant that questions of individual freedom or self-determination didn’t arise in the way that we’re used to understanding them.

Thursday, August 15, 2019

Elon Musk unveils Neuralink’s plans for brain-reading ‘threads’ and a robot to insert them

Elizabeth Lopatto
www.theverge.com
Originally published July 16, 2019

Here is an excerpt:

“It’s not going to be suddenly Neuralink will have this neural lace and start taking over people’s brains,” Musk said. “Ultimately,” he wants “to achieve a symbiosis with artificial intelligence,” and he said that even in a “benign scenario,” humans would be “left behind.” Hence, he wants to create technology that allows a “merging with AI.” He later added, “we are a brain in a vat, and that vat is our skull,” and so the goal is to read neural spikes from that brain.

The first paralyzed person to receive a brain implant that allowed him to control a computer cursor was Matthew Nagle. In 2006, Nagle, who had a spinal cord injury, played Pong using only his mind; the basic movement required took him only four days to master, he told The New York Times. Since then, paralyzed people with brain implants have also brought objects into focus and moved robotic arms in labs, as part of scientific research. The system Nagle and others have used is called BrainGate and was developed initially at Brown University.

“Neuralink didn’t come out of nowhere, there’s a long history of academic research here,” Hodak said at the presentation on Tuesday. “We’re, in the greatest sense, building on the shoulders of giants.” However, none of the existing technologies fit Neuralink’s goal of directly reading neural spikes in a minimally invasive way.

The system presented today, if it’s functional, may be a substantial advance over older technology. BrainGate relied on the Utah Array, a series of stiff needles that allows for up to 128 electrode channels. Not only is that fewer channels than Neuralink is promising — meaning less data from the brain is being picked up — it’s also stiffer than Neuralink’s threads. That’s a problem for long-term functionality: the brain shifts in the skull but the needles of the array don’t, leading to damage. The thin polymers Neuralink is using may solve that problem.

The info is here.

Tuesday, August 7, 2018

Thousands of leading AI researchers sign pledge against killer robots

Ian Sample
The Guardian
Originally posted July 18, 2018

Here is an excerpt:

The military is one of the largest funders and adopters of AI technology. With advanced computer systems, robots can fly missions over hostile terrain, navigate on the ground, and patrol under seas. More sophisticated weapon systems are in the pipeline. On Monday, the defence secretary Gavin Williamson unveiled a £2bn plan for a new RAF fighter, the Tempest, which will be able to fly without a pilot.

UK ministers have stated that Britain is not developing lethal autonomous weapons systems and that its forces will always have oversight and control of the weapons it deploys. But the campaigners warn that rapid advances in AI and other fields mean it is now feasible to build sophisticated weapons that can identify, track and fire on human targets without consent from a human controller. For many researchers, giving machines the decision over who lives and dies crosses a moral line.

“We need to make it the international norm that autonomous weapons are not acceptable. A human must always be in the loop,” said Toby Walsh, a professor of AI at the University of New South Wales in Sydney who signed the pledge.

The info is here.

Tuesday, May 2, 2017

Would You Become An Immortal Machine?

Marcelo Gleiser
npr.org
Originally posted March 27, 2017

Here is an excerpt:

"A man is a god in ruins," wrote Ralph Waldo Emerson. This quote, which O'Connell places at the book's opening page, captures the essence of the quest. If man is a failed god, there may be a way to fix this. Since "The Fall," we "lost" our god-like immortality, and have been looking for ways to regain it. Can science do this? Is mortality merely a scientific question? Suppose that it is — and that we can fix it, as we can a headache. Would you pay the price by transferring your "essence" to a non-human entity that will hold it, be it silicone or some kind of artificial robot? Can you be you when you don't have your body? Are you really just transferrable information?

As O'Connell meets an extraordinary group of people, from serious scientists and philosophers to wackos, he keeps asking himself this question, knowing full well his answer: Absolutely not! What makes us human is precisely our fallibility, our connection to our bodies, the existential threat of death. Remove that and we are a huge question mark, something we can't even contemplate. No thanks, says O'Connell, in a deliciously satiric style, at once lyrical, informative, and captivating.

The article is here.

Wednesday, December 21, 2016

The Case Against Reality

Amanda Gefter
The Atlantic
Originally published April 25, 2016

Here is an excerpt:

Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What’s more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction.

Getting at questions about the nature of reality, and disentangling the observer from the observed, is an endeavor that straddles the boundaries of neuroscience and fundamental physics. On one side you’ll find researchers scratching their chins raw trying to understand how a three-pound lump of gray matter obeying nothing more than the ordinary laws of physics can give rise to first-person conscious experience. This is the aptly named “hard problem.”

On the other side are quantum physicists, marveling at the strange fact that quantum systems don’t seem to be definite objects localized in space until we come along to observe them. Experiment after experiment has shown—defying common sense—that if we assume that the particles that make up ordinary objects have an objective, observer-independent existence, we get the wrong answers. The central lesson of quantum physics is clear: There are no public objects sitting out there in some preexisting space. As the physicist John Wheeler put it, “Useful as it is under ordinary circumstances to say that the world exists ‘out there’ independent of us, that view can no longer be upheld.”

The article is here.

Sunday, May 3, 2015

What if a bionic leg is so good that someone chooses to amputate?

By Jemima Kiss
The Guardian
Originally published April 9, 2015

Here is an excerpt:

Bionics will become so appealing that some people may choose to amputate just so that they can augment their bodies; our own legs might begin to feel heavy and stupid, he thinks. Given how common cosmetic surgery already is, how would we feel about going under the knife for an arguably more justifiable benefit? This raises some intensely challenging issues about whether we will see a far more profound human digital divide, already hinted at in sci-fi countless times: the augmented, and the unaugmented.

In this view of the body as a biological machine, the parts that don’t work can be replaced, improved, remodelled.

The entire article is here.