Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Transhumanism.

Monday, January 1, 2024

Cyborg computer with living brain organoid aces machine learning tests

Loz Blain
New Atlas
Originally posted 12 DEC 23

Here are two excerpts:

Now, Indiana University researchers have taken a slightly different approach by growing a brain "organoid" and mounting that on a silicon chip. The difference might seem academic, but by allowing the stem cells to self-organize into a three-dimensional structure, the researchers hypothesized that the resulting organoid might be significantly smarter, that the neurons might exhibit more "complexity, connectivity, neuroplasticity and neurogenesis" if they were allowed to arrange themselves more like the way they normally do.

So they grew themselves a little brain ball organoid, less than a millimeter in diameter, and they mounted it on a high-density multi-electrode array – a chip that's able to send electrical signals into the brain organoid, as well as read the electrical signals that come out due to neural activity.

They called it "Brainoware" – which they probably meant as something adjacent to hardware and software, but which sounds far too close to "BrainAware" for my sensitive tastes, and evokes the perpetual nightmare of one of these things becoming fully sentient and understanding its fate.

(cut)

And finally, much like the team at Cortical Labs, this team really has no clear idea what to do about the ethics of creating micro-brains out of human neurons and wiring them into living cyborg computers. “As the sophistication of these organoid systems increases, it is critical for the community to examine the myriad of neuroethical issues that surround biocomputing systems incorporating human neural tissue," wrote the team. "It may be decades before general biocomputing systems can be created, but this research is likely to generate foundational insights into the mechanisms of learning, neural development and the cognitive implications of neurodegenerative diseases."


Here is my summary:

There is a new type of computer chip that uses living brain cells. The brain cells are grown from human stem cells and organized into a ball-like structure called an organoid. The organoid is mounted on a chip that can send electrical signals to the brain cells and read the signals they produce. The researchers found that the organoid could learn tasks such as speech recognition and mathematical prediction faster than traditional computers. They believe this type of chip could have many applications, such as in artificial intelligence and medical research. However, there are also ethical concerns about using living brain cells in computers.
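The Brainoware study framed the organoid as a reservoir-computing element: a fixed, nonlinear dynamical system transforms the input stream, and only a simple linear readout is trained. As a loose in-silico analogy – not the authors' code – here is a minimal echo state network sketch; the reservoir size, toy task, and all parameters are illustrative assumptions.

```python
import numpy as np

# Minimal echo-state-network sketch of the reservoir-computing idea behind
# Brainoware: a fixed, randomly connected reservoir (here simulated in
# silico; in the study, the living organoid) transforms inputs, and only a
# simple linear readout is trained. Sizes and the toy task are assumptions.

rng = np.random.default_rng(0)

N = 200                                      # reservoir units
W_in = rng.uniform(-0.5, 0.5, (N, 1))        # fixed, untrained input weights
W = rng.uniform(-0.5, 0.5, (N, N))           # fixed, untrained recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # scale spectral radius below 1

def run_reservoir(u):
    """Drive the fixed reservoir with input sequence u, collect its states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in.ravel() * u_t)  # untrained dynamics
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.arange(0, 60, 0.1)
u, y = np.sin(t[:-1]), np.sin(t[1:])

X = run_reservoir(u)
washout = 50                                 # discard the initial transient
X, y = X[washout:], y[washout:]

# Train only the linear readout (ridge regression); the reservoir is fixed.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)

pred = X @ W_out
print("readout RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

Here the random tanh network plays the role of the untrained organoid: all learning happens in the cheap linear readout, which is why such systems can adapt with comparatively little training.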

Monday, August 14, 2023

Artificial intelligence, superefficiency and the end of work: a humanistic perspective on meaning in life

Knell, S., & Rüther, M. (2023).
AI and Ethics.

Abstract

How would it be assessed from an ethical point of view if human wage work were replaced by artificially intelligent systems (AI) in the course of an automation process? An answer to this question has been discussed above all under the aspects of individual well-being and social justice. Although these perspectives are important, in this article, we approach the question from a different perspective: that of leading a meaningful life, as understood in analytical ethics on the basis of the so-called meaning-in-life debate. Our thesis here is that a life without wage work loses specific sources of meaning, but can still be sufficiently meaningful in certain other ways. Our starting point is John Danaher’s claim that ubiquitous automation inevitably leads to an achievement gap. Although we share this diagnosis, we reject his provocative solution according to which game-like virtual realities could be an adequate substitute source of meaning. Subsequently, we outline our own systematic alternative which we regard as a decidedly humanistic perspective. It focuses both on different kinds of social work and on rather passive forms of being related to meaningful contents. Finally, we go into the limits and unresolved points of our argumentation as part of an outlook, but we also try to defend its fundamental persuasiveness against a potential objection.

Concluding remarks

In this article, we explored the question of how we can find meaning in a post-work world. Our answer relies on a critique of John Danaher’s utopia of games and tries to stick to the humanistic idea that we do not have to alter our human lifeform in an extensive way and can keep up our orientation towards common ideals, such as working towards the good, the true and the beautiful.

Our proposal still has some shortcomings, including the following two, which we cannot deal with extensively but at least want to comment on briefly. First, we assumed that certain professional fields, especially in the meaning-conferring area of the good, cannot be automated, so that the possibility of mini-jobs in these areas can be considered. This assumption is based on a substantial thesis from the philosophy of mind, namely that AI systems cannot develop consciousness and consequently no genuine empathy. This assumption needs to be further elaborated, especially in view of some forecasts that even the altruistic and philanthropic professions are not immune to automation by superefficient systems. Second, we have adopted without further critical discussion the premise of the hybrid standard model of a meaningful life, according to which meaning-conferring objective value is to be found in the three spheres of the true, the good, and the beautiful. We take this premise to be intuitively appealing, but a further elaboration of our argument would have to work out whether this triad is really exhaustive and, if so, from which more general underlying principle it derives.


Full transparency: I am a big John Danaher fan. Regardless, here is my summary:

Humans are meaning makers. We find meaning in our work, our relationships, and our engagement with the world. The article discusses the potential impact of AI on the meaning of work, and the authors make some good points. However, I think their solution is somewhat idealistic. It is true that social relationships and engagement with the world can provide us with meaning, but these activities will be harder to sustain in a world where AI is doing most of the work. We will need ways to cooperate, achieve, and interact in pursuit of superordinate goals. Humans need to align their lives with core human principles, such as meaning-making, pattern repetition, cooperation, and values-based behaviors.
  • The authors focus on the potential impact of AI on the meaning of work, but they also acknowledge that other factors, such as automation and globalization, are also having an impact.
  • The authors' solution is based on the idea that meaning comes from relationships and engagement with the world. However, there are other theories about the meaning of life, such as the idea that meaning comes from self-actualization or from religious faith.
  • The authors acknowledge that their solution is not perfect, but they argue that it is a better alternative than Danaher's solution. However, I think it is important to consider all of the options before deciding which one is best.  Ultimately, it will come down to a values-based decision, as there seems to be no one right or correct solution.

Sunday, June 4, 2023

We need to examine the beliefs of today’s tech luminaries

Anjana Ahuja
Financial Times
Originally posted 10 MAY 23

People who are very rich or very clever, or both, sometimes believe weird things. Some of these beliefs are captured in the acronym Tescreal. The letters represent overlapping futuristic philosophies — bookended by transhumanism and longtermism — favoured by many of AI’s wealthiest and most prominent supporters.

The label, coined by a former Google ethicist and a philosopher, is beginning to circulate online and usefully explains why some tech figures would like to see the public gaze trained on fuzzy future problems such as existential risk, rather than on current liabilities such as algorithmic bias. A fraternity that is ultimately committed to nurturing AI for a posthuman future may care little for the social injustices committed by their errant infant today.

As well as transhumanism, which advocates for the technological and biological enhancement of humans, Tescreal encompasses extropianism, a belief that science and technology will bring about indefinite lifespan; singularitarianism, the idea that an artificial superintelligence will eventually surpass human intelligence; cosmism, a manifesto for curing death and spreading outwards into the cosmos; rationalism, the conviction that reason should be the supreme guiding principle for humanity; effective altruism, a social movement that calculates how to maximally benefit others; and longtermism, a radical form of utilitarianism which argues that we have moral responsibilities towards the people who are yet to exist, even at the expense of those who currently do.

(cut, and the ending)

Gebru, along with others, has described such talk as fear-mongering and marketing hype. Many will be tempted to dismiss her views — she was sacked from Google after raising concerns over energy use and social harms linked to large language models — as sour grapes, or an ideological rant. But that glosses over the motivations of those running the AI show, a dazzling corporate spectacle with a plot line that very few are able to confidently follow, let alone regulate.

Repeated talk of a possible techno-apocalypse not only sets up these tech glitterati as guardians of humanity, it also implies an inevitability in the path we are taking. And it distracts from the real harms racking up today, identified by academics such as Ruha Benjamin and Safiya Noble. Decision-making algorithms using biased data are deprioritising black patients for certain medical procedures, while generative AI is stealing human labour, propagating misinformation and putting jobs at risk.

Perhaps those are the plot twists we were not meant to notice.


Tuesday, August 10, 2021

The irrationality of transhumanists

Susan B. Levin
iai.tv Issue 9
Originally posted 11 Jan 21

Bioenhancement is among the hottest topics in bioethics today. The most contentious area of debate here is advocacy of “radical” enhancement (aka transhumanism). Because transhumanists urge us to categorically heighten select capacities, above all, rationality, it would be incorrect to say that the possessors of these abilities were human beings: to signal, unmistakably, the transcendent status of these beings, transhumanists call them “posthuman,” “godlike,” and “divine.” For many, the idea of humanity’s technological self-transcendence has a strong initial appeal; that appeal, intensified by transhumanists’ relentless confidence that radical bioenhancement will occur if only we commit adequate resources to the endeavor, yields a viscerally potent combination. On this of all topics, however, we should not let ourselves be ruled by viscera. 

Transhumanists present themselves as the sole rational parties to the debate over radical bioenhancement: merely questioning a dedication to skyrocketing rational capacity or lifespan testifies to one’s irrationality. Scientifically, for this charge of irrationality not to be intellectually perverse, the evidence on transhumanists’ side would have to be overwhelming.

(cut)

Transhumanists are committed to extreme rational essentialism: they treasure the limitless augmentation of rational capacity, treating affect as irrelevant or targeting it (at minimum, the so-called negative variety) for elimination. Further disrupting transhumanists’ fixation with radical cognitive bioenhancement, therefore, is the finding that pharmacological boosts, such as they are, may not be entirely or even mainly cognitive. Motivation may be strengthened, with resulting boosts to subjects’ informational facility. What’s more, being in a “positive” (i.e., happy) mood can impair cognitive performance, while being in a “negative” (i.e., sad) one can strengthen it by, for instance, making subjects more disposed to reject stereotypes. 

Tuesday, November 5, 2019

Will Robots Wake Up?

Susan Schneider
orbitermag.com
Originally published September 30, 2019

Machine consciousness, if it ever exists, may not be found in the robots that tug at our heartstrings, like R2-D2. It may instead reside in some unsexy server farm in the basement of a computer science building at MIT. Or perhaps it will exist in some top-secret military program and get snuffed out, because it is too dangerous or simply too inefficient.

AI consciousness likely depends on phenomena that we cannot, at this point, gauge—such as whether some microchip yet to be invented has the right configuration, or whether AI developers or the public want conscious AI. It may even depend on something as unpredictable as the whim of a single AI designer, like Anthony Hopkins’s character in Westworld. The uncertainty we face moves me to a middle-of-the-road position, one that stops short of either techno-optimism (believing that technology can solve our problems) or biological naturalism (the view that only biological brains can be conscious).

This approach I call, simply, the “Wait and See Approach.”

In keeping with my desire to look at real-world considerations that speak to whether AI consciousness is even compatible with the laws of nature—and, if so, whether it is technologically feasible or even interesting to build—my discussion draws from concrete scenarios in AI research and cognitive science.


Friday, October 26, 2018

The Ethics Of Transhumanism And The Cult Of Futurist Biotech

Julian Vigo
Forbes.com
Originally posted September 24, 2018

Here is an excerpt:

The philosophical tenets, academic theories, and institutional practices of transhumanism are well-known. Max More, a British philosopher and leader of the extropian movement, claims that transhumanism is the “continuation and acceleration of the evolution of intelligent life beyond its currently human form and human limitations by means of science and technology, guided by life-promoting principles and values.” This very definition, however, is a paradox since the ethos of this movement is to promote life through that which is not life, even by removing pieces of life, to create something billed as meta-life. Indeed, it is clear that transhumanism banks on its own contradiction: that life is deficient as is, yet can be bettered by prolonging life even to the detriment of life.

Stefan Lorenz Sorgner is a German philosopher and bioethicist who has written widely on the ethical implications of transhumanism, including writings on cryonics and the longevity of human life, all of which go against most ecological principles given the amount of resources needed to keep a body in “suspended animation” post-death. At the heart of Sorgner’s writings, like those of Kyle Munkittrick, is an almost naïve rejection of death, noting that death is neither “natural” nor a part of human evolution. In fact, much of the writing on transhumanism takes a radical approach to technology: anyone who dares question whether cutting off healthy limbs to make way for a super-Olympian sportsperson is desirable would be called a Luddite, anti-technology. But that is a false dichotomy, since most critics of transhumanism are not against all technology; they question the ethics of any technology that interferes with human rights.


Wednesday, September 26, 2018

Do psychotropic drugs enhance, or diminish, human agency?

Rami Gabriel
aeon.co
Originally posted September 3, 2018

Here is an excerpt:

Psychological medications such as Xanax, Ritalin and aspirin help to modify undesirable behaviours, thought patterns and the perception of pain. They purport to treat the underlying chemical cause rather than the social, interpersonal or psychodynamic causes of pathology. Self-knowledge gained by introspection and dialogue are no longer our primary means for modifying psychological states. By prescribing such medication, physicians are implicitly admitting that cognitive and behavioural training is insufficient and impractical, and that ‘the brain’, of which nonspecialists have little explicit understanding, is in fact the level where errors occur. Indeed, drugs are reliable and effective because they implement the findings of neuroscience and supplement (or in many cases substitute for) our humanist discourse about self-development and agency. In using such drugs, we become transhuman hybrid beings who build tools into the regulatory plant of the body.

Recreational drugs, on the other hand, are essentially hedonic tools that allow for stress-release and the diminishment of inhibition and sense of responsibility. Avenues of escape are reached through derangement of thought and perception; many find pleasure in this transcendence of quotidian experience and transgression of social norms. There is also a Dionysian, or spiritual, purpose to recreational inebriation, which can enable revelations that enhance intimacy and the emotional need for existential reflection. Here drugs act as portals into spiritual rituals and otherwise restricted metaphysical spaces. The practice of imbibing a sacred substance is as old as ascetic and mindfulness practices but, in our times, drugs are overwhelmingly the most commonly used tool for tending to this element of the human condition.


Wednesday, October 4, 2017

Better Minds, Better Morals: A Procedural Guide to Better Judgment

G. Owen Schaefer and Julian Savulescu
Journal of Posthuman Studies, Vol. 1, No. 1 (2017), pp. 26-43

Abstract:

Making more moral decisions – an uncontroversial goal, if ever there was one. But how to go about it? In this article, we offer a practical guide on ways to promote good judgment in our personal and professional lives. We will do this not by outlining what the good life consists in or which values we should accept. Rather, we offer a theory of “procedural reliability”: a set of dimensions of thought that are generally conducive to good moral reasoning. At the end of the day, we all have to decide for ourselves what is good and bad, right and wrong. The best way to ensure we make the right choices is to ensure the procedures we’re employing are sound and reliable. We identify four broad categories of judgment to be targeted – cognitive, self-management, motivational and interpersonal. Specific factors within each category are further delineated, with a total of 14 factors to be discussed. For each, we will go through the reasons it generally leads to more morally reliable decision-making, how various thinkers have historically addressed the topic, and the insights of recent research that can offer new ways to promote good reasoning. The result is a wide-ranging survey that contains practical advice on how to make better choices. Finally, we relate this to the project of transhumanism and prudential decision-making. We argue that transhumans will employ better moral procedures like these. We also argue that the same virtues will enable us to take better control of our own lives, enhancing our responsibility and enabling us to lead better lives from the prudential perspective.


Sunday, August 27, 2017

Super-intelligence and eternal life

Transhumanism’s faithful follow it blindly into a future for the elite

Alexander Thomas
The Conversation
First published July 31, 2017

The rapid development of so-called NBIC technologies – nanotechnology, biotechnology, information technology and cognitive science – is giving rise to possibilities that have long been the domain of science fiction. Disease, ageing and even death are all human realities that these technologies seek to end.

They may enable us to enjoy greater “morphological freedom” – we could take on new forms through prosthetics or genetic engineering. Or advance our cognitive capacities. We could use brain-computer interfaces to link us to advanced artificial intelligence (AI).

Nanobots could roam our bloodstream to monitor our health and enhance our emotional propensities for joy, love or other emotions. Advances in one area often raise new possibilities in others, and this “convergence” may bring about radical changes to our world in the near-future.

“Transhumanism” is the idea that humans should transcend their current natural state and limitations through the use of technology – that we should embrace self-directed human evolution. If the history of technological progress can be seen as humankind’s attempt to tame nature to better serve its needs, transhumanism is the logical continuation: the revision of humankind’s nature to better serve its fantasies.


Sunday, July 30, 2017

Engineering Eden: The quest for eternal life

Kristin Kostick
Baylor College of Medicine
Originally posted June 2, 2017

If you’re like most people, you may associate the phrase “eternal life” with religion: The promise that we can live forever if we just believe in God. You probably don’t associate the phrase with an image of scientists working in a lab, peering at worms through microscopes or mice skittering through boxes. But you should.

The quest for eternal life has only recently begun to step out from behind the pews and into the petri dish.

I recently discussed the increasing feasibility of the transhumanist vision due to continuing advancements in biotech, gene- and cell-therapies. These emerging technologies, however, don’t erase the fact that religion – not science – has always been our salve for confronting death’s inevitability. For believers, religion provides an enduring mechanism (belief and virtue) behind the perpetuity of existence, and shushes our otherwise frantic inability to grasp: How can I, as a person, just end?

The Mormon transhumanist Lincoln Cannon argues that science, rather than religion, offers a tangible solution to this most basic existential dilemma. He points out that it is no longer tenable to believe in eternal life as only available in heaven, requiring the death of our earthly bodies before becoming eternal, celestial beings.

Would a rational person choose to believe in an uncertain, spiritual afterlife over the tangible persistence of one’s own familiar body and the comforting security of relationships we’ve fostered over a lifetime of meaningful interactions?


Tuesday, May 2, 2017

Would You Become An Immortal Machine?

Marcelo Gleiser
npr.org
Originally posted March 27, 2017

Here is an excerpt:

"A man is a god in ruins," wrote Ralph Waldo Emerson. This quote, which O'Connell places at the book's opening page, captures the essence of the quest. If man is a failed god, there may be a way to fix this. Since "The Fall," we "lost" our god-like immortality, and have been looking for ways to regain it. Can science do this? Is mortality merely a scientific question? Suppose that it is — and that we can fix it, as we can a headache. Would you pay the price by transferring your "essence" to a non-human entity that will hold it, be it silicon or some kind of artificial robot? Can you be you when you don't have your body? Are you really just transferable information?

As O'Connell meets an extraordinary group of people, from serious scientists and philosophers to wackos, he keeps asking himself this question, knowing fully well his answer: Absolutely not! What makes us human is precisely our fallibility, our connection to our bodies, the existential threat of death. Remove that and we are a huge question mark, something we can't even contemplate. No thanks, says O'Connell, in a deliciously satiric style, at once lyrical, informative, and captivating.


Tuesday, April 18, 2017

‘Your animal life is over. Machine life has begun.’

Mark O'Connell
The Guardian
Originally published March 25, 2017

Here is an excerpt:

The relevant science for whole brain emulation is, as you’d expect, hideously complicated, and its interpretation deeply ambiguous, but if I can risk a gross oversimplification here, I will say that it is possible to conceive of the idea as something like this: first, you scan the pertinent information in a person’s brain – the neurons, the endlessly ramifying connections between them, the information-processing activity of which consciousness is seen as a byproduct – through whatever technology, or combination of technologies, becomes feasible first (nanobots, electron microscopy, etc). That scan then becomes a blueprint for the reconstruction of the subject brain’s neural networks, which is then converted into a computational model. Finally, you emulate all of this on a third-party non-flesh-based substrate: some kind of supercomputer or a humanoid machine designed to reproduce and extend the experience of embodiment – something, perhaps, like Natasha Vita-More’s Primo Posthuman.

The whole point of substrate independence, as Koene pointed out to me whenever I asked him what it would be like to exist outside of a human body – and I asked him many times, in various ways – was that it would be like no one thing, because there would be no one substrate, no one medium of being. This was the concept transhumanists referred to as “morphological freedom” – the liberty to take any bodily form technology permits.

“You can be anything you like,” as an article about uploading in Extropy magazine put it in the mid-90s. “You can be big or small; you can be lighter than air and fly; you can teleport and walk through walls. You can be a lion or an antelope, a frog or a fly, a tree, a pool, the coat of paint on a ceiling.”


Monday, April 3, 2017

Can Human Evolution Be Controlled?

William B. Hurlbut
Big Questions Online
Originally published February 17, 2017

Here is an excerpt:

These gene-editing techniques may transform our world as profoundly as many of the greatest scientific discoveries and technological innovations of the past — like electricity, synthetic chemistry, and nuclear physics. CRISPR/Cas9 could provide urgent and uncontroversial progress in biomedical science, agriculture, and environmental ecology. Indeed, the power and depth of operation of these new tools is delivering previously unimagined possibilities for reworking or redeploying natural biological processes — some with startling and disquieting implications. Proposals by serious and well-respected scientists include projects of broad ecological engineering, de-extinction of human ancestral species, a biotechnological “cure” for aging, and guided evolution of the human future.

The questions raised by such projects go beyond issues of individual rights and social responsibilities to considerations of the very source and significance of the natural world, its integrated and interdependent processes, and the way these provide the foundational frame for the physical, psychological, and spiritual meaning of human life.
