Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, November 19, 2022

Human mini-brains were transplanted into rats. Is this ethical?

Julian Savulescu
channelnewsasia.com
Originally posted October 22, 2022

Here is an excerpt:

Are 'Humanized Rats' just rats?

In a world-first, scientists have transplanted human brain cells into the brains of baby rats, offering immense possibilities to study and develop treatment for neurological and psychiatric conditions.

The human brain tissue, known as brain organoids or “mini-organs”, consists of independent nerve structures grown in a lab from a person’s cells, such as their skin cells, using stem cell technology. Although they can’t yet replicate a full brain, they resemble features or parts of an embryonic human brain.

The study, published in the journal Nature on Oct 12, showed that the human organoids integrated into the rat brain and functioned, and were even capable of affecting the behaviour of the rats.

A few months later, up to one-sixth of the rat cortex was human. In terms of their biology, they were “humanised rats”.

This is an exciting discovery for science. It will allow brain organoids to grow bigger than they have in a lab, and opens up many possibilities of understanding how early human neurons develop and form the brain, and what goes wrong in disease. It also raises the possibility of organoids being used to treat brain injury.

Indeed, the rat models showed the neuronal defects related to a rare, severe disease called Timothy syndrome, a genetic condition that affects brain development and causes severe autism.

This is one step further along the long road to progress against brain disease, which has so far proved intractable.

The research must go ahead. But at the same time, it calls for new standards to be set for future research. At present, the research raises no significant new ethical issues. However, it opens the door to more elaborate or ambitious research that could raise significant ethical issues.

Moral Status of Animals with Human Tissue

The human tissue transplanted into the rats’ brains was in a region that processes sensory information such as touch and pain.

These organoids did not increase the capacities of the rats. But as larger organoids are introduced, or organoids are introduced affecting more key areas of the brain, the rat brain may acquire more advanced consciousness, including higher rational capacities or self-consciousness.

This would raise issues of how such “enhanced” rats ought to be treated. If their brains are significantly enhanced, it would be important not to treat them as mere rats simply because they look like rats.

This requires discussion, and boundaries must be set around what kinds of organoids can be implanted and which key brain sites could become targets for enhancement of capacities that matter to moral status.

Thursday, August 8, 2019

Microsoft wants to build artificial general intelligence: an AI better than humans at everything

Kelsey Piper
www.vox.com
Originally published July 22, 2019

Here is an excerpt:

Existing AI systems beat humans at lots of narrow tasks — chess, Go, StarCraft, image generation — and they’re catching up to humans at others, like translation and news reporting. But an artificial general intelligence would be one system with the capacity to surpass us at all of those things. Enthusiasts argue that it would enable centuries of technological advances to arrive, effectively, all at once — transforming medicine, food production, green technologies, and everything else in sight.

Others warn that, if poorly designed, it could be a catastrophe for humans in a few different ways. A sufficiently advanced AI could pursue a goal that we hadn’t intended — a recipe for catastrophe. It could turn out unexpectedly impossible to correct once running. Or it could be maliciously used by a small group of people to harm others. Or it could just make the rich richer and leave the rest of humanity even further in the dust.

Getting AGI right may be one of the most important challenges ahead for humanity. Microsoft’s billion-dollar investment has the potential to push the frontiers forward for AI development, but to get AGI right, investors have to be willing to prioritize safety concerns that might slow commercial development.

The info is here.

Thursday, November 30, 2017

Why We Should Be Concerned About Artificial Superintelligence

Matthew Graves
Skeptic Magazine
Originally published November 2017

Here is an excerpt:

Our intelligence is ultimately a mechanistic process that happens in the brain, but there is no reason to assume that human intelligence is the only possible form of intelligence. And while the brain is complex, this is partly an artifact of the blind, incremental progress that shaped it—natural selection. This suggests that developing machine intelligence may turn out to be a simpler task than reverse-engineering the entire brain. The brain sets an upper bound on the difficulty of building machine intelligence; work to date in the field of artificial intelligence sets a lower bound; and within that range, it’s highly uncertain exactly how difficult the problem is. We could be 15 years away from the conceptual breakthroughs required, or 50 years away, or more.

The fact that artificial intelligence may be very different from human intelligence also suggests that we should be very careful about anthropomorphizing AI. Depending on the design choices AI scientists make, future AI systems may not share our goals or motivations; they may have very different concepts and intuitions; or terms like “goal” and “intuition” may not even be particularly applicable to the way AI systems think and act. AI systems may also have blind spots regarding questions that strike us as obvious. AI systems might also end up far more intelligent than any human.

The last possibility deserves special attention, since superintelligent AI has far more practical significance than other kinds of AI.

AI researchers generally agree that superintelligent AI is possible, though they have different views on how and when it’s likely to be developed. In a 2013 survey, top-cited experts in artificial intelligence assigned a median 50% probability to AI being able to “carry out most human professions at least as well as a typical human” by the year 2050, and also assigned a 50% probability to AI greatly surpassing the performance of every human in most professions within 30 years of reaching that threshold.

The article is here.