Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Showing posts with label Progress. Show all posts

Friday, March 15, 2024

The consciousness wars: can scientists ever agree on how the mind works?

Mariana Lenharo

Neuroscientist Lucia Melloni didn’t expect to be reminded of her parents’ divorce when she attended a meeting about consciousness research in 2018. But, much like her parents, the assembled academics couldn’t agree on anything.

The group of neuroscientists and philosophers had convened at the Allen Institute for Brain Science in Seattle, Washington, to devise a way to empirically test competing theories of consciousness against each other: a process called adversarial collaboration.

Devising a killer experiment was fraught. “Of course, each of them was proposing experiments for which they already knew the expected results,” says Melloni, who led the collaboration and is based at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany. Melloni, falling back on her childhood role, became the go-between.

The collaboration Melloni is leading is one of five launched by the Templeton World Charity Foundation, a philanthropic organization based in Nassau, the Bahamas. The charity funds research into topics such as spirituality, polarization and religion; in 2019, it committed US$20 million to the five projects.

The aim of each collaboration is to move consciousness research forward by getting scientists to produce evidence that supports one theory and falsifies the predictions of another. Melloni’s group is testing two prominent ideas: integrated information theory (IIT), which claims that consciousness amounts to the degree of ‘integrated information’ generated by a system such as the human brain; and global neuronal workspace theory (GNWT), which claims that mental content, such as perceptions and thoughts, becomes conscious when the information is broadcast across the brain through a specialized network, or workspace. She and her co-leaders had to mediate between the main theorists, and seldom invited them into the same room.
Here's what the article highlights:
  • Divisions abound: Researchers disagree on the very definition of consciousness, making comparisons between theories difficult. Some focus on subjective experience, while others look at the brain's functions.
  • Testing head-to-head: New research projects are directly comparing competing theories to see which one explains experimental data better. This could be a step towards finding a unifying explanation.
  • Heated debate: The recent critique of one prominent theory, Integrated Information Theory (IIT), shows the depth of the disagreements. Some question its scientific validity, while others defend it as a viable framework.
  • Hope for progress: Despite the disagreements, there's optimism. New research methods and a younger generation of researchers focused on collaboration could lead to breakthroughs in understanding this elusive phenomenon.

Thursday, January 4, 2024

Americans’ Trust in Scientists, Positive Views of Science Continue to Decline

Brian Kennedy & Alec Tyson
Pew Research
Originally published 14 NOV 23

Impact of science on society

Overall, 57% of Americans say science has had a mostly positive effect on society. This share is down 8 percentage points since November 2021 and down 16 points since before the start of the coronavirus outbreak.

About a third (34%) now say the impact of science on society has been equally positive and negative. A small share (8%) think science has had a mostly negative impact on society.

Trust in scientists

When it comes to the standing of scientists, 73% of U.S. adults have a great deal or fair amount of confidence in scientists to act in the public’s best interests. But trust in scientists is 14 points lower than it was at the early stages of the pandemic.

The share expressing the strongest level of trust in scientists – saying they have a great deal of confidence in them – has fallen from 39% in 2020 to 23% today.

As trust in scientists has fallen, distrust has grown: Roughly a quarter of Americans (27%) now say they have not too much or no confidence in scientists to act in the public’s best interests, up from 12% in April 2020.

Ratings of medical scientists mirror the trend seen in ratings of scientists generally. Read Chapter 1 of the report for a detailed analysis of this data.

Differences between Republicans and Democrats in ratings of scientists and science

Declining levels of trust in scientists and medical scientists have been particularly pronounced among Republicans and Republican-leaning independents over the past several years. In fact, nearly four-in-ten Republicans (38%) now say they have not too much or no confidence at all in scientists to act in the public’s best interests. This share is up dramatically from the 14% of Republicans who held this view in April 2020. Much of this shift occurred during the first two years of the pandemic and has persisted in more recent surveys.

My take on why this is important:

Science is a critical driver of progress. From technological advancements to medical breakthroughs, scientific discoveries have dramatically improved our lives. Without public trust in science, these advancements may slow or stall.

Science plays a vital role in addressing complex challenges. Climate change, pandemics, and other pressing issues demand evidence-based solutions. Undermining trust in science weakens our ability to respond effectively to these challenges.

Erosion of trust can have far-reaching consequences. It can fuel misinformation campaigns, hinder scientific collaboration, and ultimately undermine public health and well-being.

Friday, November 3, 2023

Posthumanism’s Revolt Against Responsibility

Nolen Gertz
Commonweal Magazine
Originally published 31 Oct 23

Here is an excerpt:

A major problem with this view—one Kirsch neglects—is that it conflates the destructiveness of particular humans with the destructiveness of humanity in general. Acknowledging that climate change is driven by human activity should not prevent us from identifying precisely which humans and activities are to blame. Plenty of people are concerned about climate change and have altered their behavior by, for example, using public transportation, recycling, or being more conscious about what they buy. Yet this individual behavior change is not sufficient because climate change is driven by the large-scale behavior of corporations and governments.

In other words, it is somewhat misleading to say we have entered the “Anthropocene” because anthropos is not as a whole to blame for climate change. Rather, in order to place the blame where it truly belongs, it would be more appropriate—as Jason W. Moore, Donna J. Haraway, and others have argued—to say we have entered the “Capitalocene.” Blaming humanity in general for climate change excuses those particular individuals and groups actually responsible. To put it another way, to see everyone as responsible is to see no one as responsible. Anthropocene antihumanism is thus a public-relations victory for the corporations and governments destroying the planet. They can maintain business as usual on the pretense that human nature itself is to blame for climate change and that there is little or nothing corporations or governments can or should do to stop it, since, after all, they’re only human.

Kirsch does not address these straightforward criticisms of Anthropocene antihumanism. This throws into doubt his claim that he is cataloguing their views to judge whether they are convincing and to explore their likely impact. Kirsch does briefly bring up the activist Greta Thunberg as a potential opponent of the nihilistic antihumanists, but he doesn’t consider her challenge in depth. 

Here is my summary:

Anthropocene antihumanism is a pessimistic view that sees humanity as a destructive force on the planet. It argues that humans have caused climate change, mass extinctions, and other environmental problems, and that we are ultimately incapable of living in harmony with nature. Some Anthropocene antihumanists believe that humanity should go extinct, while others believe that we should radically change our way of life in order to avoid destroying ourselves and the planet.

Some key points:
  • Posthumanism is a broad philosophical movement that challenges the traditional view of what it means to be human.
  • Anthropocene antihumanism and transhumanism are two strands of posthumanism that share a common theme of revolt against responsibility.
  • Anthropocene antihumanists believe that humanity is so destructive that it is beyond redemption, and that we should therefore either go extinct or give up our responsibility to manage the planet.
  • Transhumanists believe that we can transcend our human limitations and create a new, posthuman species that is not bound by the same moral and ethical constraints as humans.
  • Kirsch argues that this revolt against responsibility is a dangerous trend, and that we should instead work to create a more sustainable and just future for all.

Monday, May 8, 2023

What Thomas Kuhn Really Thought about Scientific "Truth"

John Horgan
Scientific American
Originally posted 23 May 12

Here are two excerpts:

Denying the view of science as a continual building process, Kuhn held that a revolution is a destructive as well as a creative act. The proposer of a new paradigm stands on the shoulders of giants (to borrow Newton's phrase) and then bashes them over the head. He or she is often young or new to the field, that is, not fully indoctrinated. Most scientists yield to a new paradigm reluctantly. They often do not understand it, and they have no objective rules by which to judge it. Different paradigms have no common standard for comparison; they are "incommensurable," to use Kuhn's term. Proponents of different paradigms can argue forever without resolving their basic differences because they invest basic terms—motion, particle, space, time—with different meanings. The conversion of scientists is thus both a subjective and political process. It may involve sudden, intuitive understanding—like that finally achieved by Kuhn as he pondered Aristotle. Yet scientists often adopt a paradigm simply because it is backed by others with strong reputations or by a majority of the community.

Kuhn's view diverged in several important respects from the philosophy of Karl Popper, who held that theories can never be proved but only disproved, or "falsified." Like other critics of Popper, Kuhn argued that falsification is no more possible than verification; each process wrongly implies the existence of absolute standards of evidence, which transcend any individual paradigm. A new paradigm may solve puzzles better than the old one does, and it may yield more practical applications. "But you cannot simply describe the other science as false," Kuhn said. Just because modern physics has spawned computers, nuclear power and CD players, he suggested, does not mean it is truer, in an absolute sense, than Aristotle's physics. Similarly, Kuhn denied that science is constantly approaching the truth. At the end of Structure he asserted that science, like life on earth, does not evolve toward anything but only away from something.


Kuhn declared that, although his book was not intended to be pro-science, he is pro-science. It is the rigidity and discipline of science, Kuhn said, that makes it so effective at problem-solving. Moreover, science produces "the greatest and most original bursts of creativity" of any human enterprise. Kuhn conceded that he was partly to blame for some of the anti-science interpretations of his model. After all, in Structure he did call scientists committed to a paradigm "addicts"; he also compared them to the brainwashed characters in Orwell's 1984. Kuhn insisted that he did not mean to be condescending by using terms such as "mopping up" or "puzzle-solving" to describe what most scientists do. "It was meant to be descriptive." He ruminated a bit. "Maybe I should have said more about the glories that result from puzzle solving, but I thought I was doing that."

As for the word "paradigm," Kuhn conceded that it had become "hopelessly overused" and is "out of control." Like a virus, the word spread beyond the history and philosophy of science and infected the intellectual community at large, where it came to signify virtually any dominant idea. A 1974 New Yorker cartoon captured the phenomenon. "Dynamite, Mr. Gerston!" gushed a woman to a smug-looking man. "You're the first person I ever heard use 'paradigm' in real life." The low point came during the Bush administration, when White House officials introduced an economic plan called "the New Paradigm" (which was really just trickle-down economics).

Wednesday, April 19, 2023

Meaning in Life in AI Ethics—Some Trends and Perspectives

Nyholm, S., Rüther, M. 
Philos. Technol. 36, 20 (2023). 


In this paper, we discuss the relation between recent philosophical discussions about meaning in life (from authors like Susan Wolf, Thaddeus Metz, and others) and the ethics of artificial intelligence (AI). Our goal is twofold, namely, to argue that considering the axiological category of meaningfulness can enrich AI ethics, on the one hand, and to portray and evaluate the small, but growing literature that already exists on the relation between meaning in life and AI ethics, on the other hand. We start out our review by clarifying the basic assumptions of the meaning in life discourse and how it understands the term ‘meaningfulness’. After that, we offer five general arguments for relating philosophical questions about meaning in life to questions about the role of AI in human life. For example, we formulate a worry about a possible meaningfulness gap related to AI on analogy with the idea of responsibility gaps created by AI, a prominent topic within the AI ethics literature. We then consider three specific types of contributions that have been made in the AI ethics literature so far: contributions related to self-development, the future of work, and relationships. As we discuss those three topics, we highlight what has already been done, but we also point out gaps in the existing literature. We end with an outlook regarding where we think the discussion of this topic should go next.


Meaning in Life in AI Ethics—Summary and Outlook

We have tried to show at least three things in this paper. First, we have noted that there is a growing debate on meaningfulness in some sub-areas of AI ethics, and particularly in relation to meaningful self-development, meaningful work, and meaningful relationships. Second, we have argued that this should come as no surprise. Philosophers working on meaning in life share the assumption that meaning in life is a partly autonomous value concept, which deserves ethical consideration. Moreover, as we argued in Section 4 above, there are at least five significant general arguments that can be formulated in support of the claim that questions of meaningfulness should play a prominent role in ethical discussions of newly emerging AI technologies. Third, we have also stressed that, although there is already some debate about AI and meaning in life, it does not mean that there is no further work to do. Rather, we think that the area of AI and its potential impacts on meaningfulness in life is a fruitful topic that philosophers have only begun to explore, where there is much room for additional in-depth discussions.

We will now close our discussion with three general remarks. The first stems from the observation that some of the main ethicists in the field have yet to explore their underlying meaning theory and its normative claims in a more nuanced way. This is not only a shortcoming on its own, but has some effect on how the field approaches issues. Are agency extension or moral abilities important for meaningful self-development? Should achievement gaps really play a central role in the discussion of meaningful work? And what about the many different aspects of meaningful relationships? These are only a few questions which can shed light on the presupposed underlying normative claims that are involved in the field. Here, further exploration at deeper levels could help us to see more clearly which things are important and which are not, and in which directions the field should develop.

Friday, March 3, 2023

The Petrov dilemma: Moral responsibility in the age of ChatGPT

Julian Savulescu
NUS News
Originally published 13 FEB 23

Here is an excerpt:

3. Act ethically

What is to act ethically has been debated for thousands of years. But at its heart is a simple answer: add value to the lives of other people. For Lt-Col Petrov, doing nothing was the hardest choice: “All I had to do was to reach for the phone; to raise the direct line to our top commanders – but I couldn’t move. I felt like I was sitting on a hot frying pan.”

ChatGPT is an extremely powerful technology with enormous potential for human benefit. It can distill human knowledge and wisdom, quickly create factual and normative content, enable communication between speakers of different languages, and ultimately enhance or even surpass human creativity.

As with any powerful technology, there are significant risks. At present, there are numerous mistakes and false claims, but these presumably will be ironed out. Longer-term risks include threats to privacy; large-scale perpetuation of biases; plagiarism; displacement of human workers and creativity; and deception and malicious use.

Acting ethically requires realising the value of technology while ensuring the risks are reasonable. Institutional or structural changes may be necessary, for example, labelling and creation of watermarks to prevent plagiarism.

To be human is to make your own ethical decisions

One ethical question that confronts regulators and industry is: should a driverless car swerve and sacrifice an innocent bystander (pedestrian or cyclist) to save a greater number of lives in the vehicle? I asked ChatGPT whether it is right to kill one to save five. Here is the answer:

“Whether or not it is right to kill one person to save five people is a complex ethical question. Some people may argue that taking one life to save five is the more logical option, and is therefore justified. However, it is also important to consider the ethical principle of respect for autonomy.

“This principle states that all individuals have an inherent right to make autonomous and informed decisions about their lives, including their own death. Taking this into consideration, it may not be morally right to force someone to accept the sacrifice of their life in order to save the lives of five other people. Ultimately, the answer to this question is subjective and depends on one’s own ethical and moral beliefs.”

But as driverless cars are rolled out, we won’t have the luxury of remaining on the fence, or relying on drivers’ reflexes to get us out of a decision. We will have to make moral choices and give reasons, and apply them consistently to programming the vehicles. This requires deep – and human – ethical thinking and expertise.

Saturday, November 26, 2022

Why are scientists growing human brain cells in the lab?

Hannah Flynn
Medical News Today
Originally posted 24 OCT 22

Here is an excerpt:

Ethical boundaries

One of the limitations of using organoids for research is that they are observed in vitro. The way an organ might act in a system, in connection with different organs, or when exposed to metabolites in the blood, for example, could be different from how it behaves when cells are isolated in a single tissue.

More recently, researchers placed an organoid derived from human cells inside the brain of a rat, in a study outlined in Nature.

Using neural organoids that had been allowed to self-organize, these were implanted into the somatosensory cortex — which is in the middle of the brain — of newborn rats. The scientists then found that these cortical organoids had grown axons throughout the rat brain, and were able to contribute to reward-seeking behavior in the rat.

This breakthrough suggested that the lab-created cells are recognizable to other tissues in the body and can influence systems.

Combining the cells of animals and humans is not without some ethical considerations. In fact, this has been the focus of a recent project.

The Brainstorm Organoid Project published its first paper in the form of a comment piece outlining the benefits of the project in Nature Neuroscience on October 18, 2022, the week after the aforementioned study was published.

The Project brought together prominent bioethicists as part of the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative of the US National Institutes of Health, which funded the project.

Co-author of the comment piece Dr. Jeantine E Lunshof, head of collaborative ethics at the Wyss Institute for Biologically Inspired Engineering at Harvard University, MA, told Medical News Today in an interview that existing biomedical research and animal welfare guidelines already provide a framework for this type of work to be done ethically.

Pointing to the International Society for Stem Cell Research guidelines published last year, she stated that those do cover the creation of chimeras, where cells of two species are combined.

These hybrids with non-primates are permitted, she explained: “This is very, very strong emphasis on animal welfare in this ISSCR guideline document that also aligns with existing animal welfare and animal research protocols.”

The potential benefits of this research needed to be considered, “though at this moment, we are still at the stage that a lot of fundamental research is necessary. And I think that that really must be emphasized,” she said.

Saturday, November 19, 2022

Human mini-brains were transplanted into rats. Is this ethical?

Julian Savulescu
Originally posted 22 OCT 22

Here is an excerpt:

Are 'Humanized Rats' just rats?

In a world-first, scientists have transplanted human brain cells into the brains of baby rats, offering immense possibilities to study and develop treatment for neurological and psychiatric conditions.

The human brain tissue, known as brain organoids or “mini-organs”, are independent nerve structures grown in a lab from a person’s cells, such as their skin cells, using stem cell technology. Although they can’t yet replicate a full brain, they resemble features or parts of an embryonic human brain.

The study, published in the journal Nature on Oct 12, showed that the human organoids integrated into the rat brain and functioned, and were even capable of affecting the behaviour of the rats.

A few months later, up to one-sixth of the rat cortex was human. In terms of their biology, they were “humanised rats”.

This is an exciting discovery for science. It will allow brain organoids to grow bigger than they have in a lab, and opens up many possibilities of understanding how early human neurons develop and form the brain, and what goes wrong in disease. It also raises the possibility of organoids being used to treat brain injury.

Indeed, the rat models showed the neuronal defects related to one rare severe disease called Timothy Syndrome, a genetic condition that affects brain development and causes severe autism.

This is one step further along the long road to making progress in brain disease, which has proved so intransigent so far.

The research must go ahead. But at the same time, it calls for new standards to be set for future research. At present, the research raises no significant new ethical issues. However, it opens the door to more elaborate or ambitious research that could raise significant ethical issues.

Moral Status of Animals with Human Tissue

The human tissue transplanted into the rats’ brains was in a region that processes sensory information such as touch and pain.

These organoids did not increase the capacities of the rats. But as larger organoids are introduced, or organoids are introduced affecting more key areas of the brain, the rat brain may acquire more advanced consciousness, including higher rational capacities or self-consciousness.

This would raise issues of how such “enhanced” rats ought to be treated. It would be important to not treat them as rats, just because they look like rats, if their brains are significantly enhanced.

This requires discussion and boundaries set around what kinds of organoids can be implanted and what key sites would be targets for enhancement of capacities that matter to moral status.

Wednesday, October 19, 2022

Technology and moral change: the transformation of truth and trust

Danaher, J., Sætra, H.S. 
Ethics Inf Technol 24, 35 (2022).


Technologies can have profound effects on social moral systems. Is there any way to systematically investigate and anticipate these potential effects? This paper aims to contribute to this emerging field of inquiry through a case study method. It focuses on two core human values—truth and trust—describes their structural properties and conceptualisations, and then considers various mechanisms through which technology is changing and can change our perspective on those values. In brief, the paper argues that technology is transforming these values by changing the costs/benefits of accessing them; allowing us to substitute those values for other, closely-related ones; increasing their perceived scarcity/abundance; and disrupting traditional value-gatekeepers. This has implications for how we study other, technologically-mediated, value changes.


Conclusion: lessons learned

Having examined our two case studies, it remains to consider whether or not there are similarities in how technology affects trust and truth, and if there are general lessons to be learned here about how technology may impact values in the future.

The two values we have considered are structurally similar and interrelated. They are both intrinsically and instrumentally valuable. They are both epistemic and practical in nature: we value truth and trust (at least in part) because they give us access to knowledge and help us to resolve the decision problems we face on a daily basis. We also see, in both case studies, similar mechanisms of value change at work. The most interesting, to our minds, are the following:
  • Technology changes the costs associated with accessing certain values, making them less or more important as a result. Digital disinformation technology increases the cost of finding out the truth, but reduces the cost of finding and reinforcing a shared identity community; reliable AI and robotics gives us an (often cheaper and more efficient) substitute for trust in humans, while still giving us access to useful cognitive, emotional and physical assistance.
  • Technology makes it easier, or more attractive, to trade off or substitute some values against others. Digital disinformation technology allows us to obviate the need for finding out the truth and focus on other values instead; reliable machines allow us to substitute the value of reliability for the value of trust. This is a function of the plural nature of values, their scarcity, and the changing cost structure of values caused by technology.
  • Technology can make some values seem more scarce (rare, difficult to obtain), thereby increasing their perceived intrinsic value. Digital disinformation makes truth more elusive, thereby increasing its perceived value, which in turn encourages some moral communities to increase their fixation on it; robots and AI make trust in humans less instrumentally necessary, thereby increasing the expressive value of trust in others.
  • Technology can disrupt power networks, thereby altering the social gatekeepers to value. To the extent that we still care about truth, digital disinformation increases the power of the epistemic elites that can help us to access the truth; trust-free or trust-alternative technologies can disrupt the power of traditional trusted third parties (professionals, experts, etc.) and redistribute power onto technology or a technological elite.

Thursday, October 13, 2022

This company is about to grow new organs in a person for the first time

Jessica Hamzelou
MIT Technology Review
Originally posted 25 AUG 22

Here is an excerpt:

Livers have a unique ability to regenerate. Cut away half an animal’s liver, and it will grow back. Human livers damaged by toxins or alcohol can usually regrow too. But some diseases can cause extensive damage from which the liver can’t recover. For these diseases, the treatment of choice is usually a liver transplant.

Transplants aren’t always an option for people who are very unwell, however. That’s why Eric Lagasse and his colleagues at LyGenesis have taken this different approach. Lagasse, a stem-cell biologist at the University of Pittsburgh, has spent years researching cell-based treatments for liver disease. Around 10 years ago, he was experimenting with the idea of injecting cells from healthy livers into diseased ones in mice.

It is difficult to access the livers of small, 25-gram mice, which Lagasse was studying, so instead he and his colleagues injected the cells into the spleens of mice with liver disease. They found that the cells were able to migrate from the spleen to the liver. To find out if they could migrate from other organs, Lagasse’s team injected liver cells at various sites in the mice’s bodies.

Only a small number of mice survived. When Lagasse and his colleagues later performed autopsies on those survivors, “I was very surprised,” he recalls. “We had a mini liver present … where the lymph node would be.”

Little incubators

Lymph nodes are small, bean-shaped structures found throughout the body. They play a crucial role in our immune health, making cells that help fight infections. And while Lagasse was initially surprised that liver cells could multiply and grow in lymph nodes, it makes sense, he says. 

Lymph nodes are natural homes for rapidly dividing cells, even if those are usually immune cells. Lymph nodes also have a good blood supply, which can aid the growth of new tissue.

Tuesday, September 13, 2022

First synthetic embryos: the scientific breakthrough raises serious ethical questions

Savulescu, J., Gyngell, C., & Sawai, T.
The Conversation
Originally posted 11 AUG 22

Here is an excerpt:

Artificial wombs

In the latest study, the scientists started with collections of stem cells. The conditions created by the external uterus triggered the developmental process that makes a fetus. Although the scientists said we are a long way off synthetic human embryos, the experiment brings us closer to a future where some humans gestate their babies artificially.

Each year over 300,000 women worldwide die in childbirth or as a result of pregnancy complications, many because they lack basic care. Even in wealthy countries, pregnancy and childbirth are risky, and healthcare providers are criticised for failing mothers.

There is an urgent need to make healthcare more accessible across the planet, provide better mental health support for mothers and make pregnancy and childbirth safer. In an ideal world every parent should expect excellent care in all aspects of motherhood. This technology could help treat premature babies and give at least some women a different option: a choice of whether to carry their child or use an external uterus.

Some philosophers say there is a moral imperative to develop artificial wombs to help remedy the unfairness of parenting roles. But other researchers say artificial wombs would threaten a woman’s legal right to terminate a pregnancy.

Synthetic embryos and organs

In the last few years, scientists have learned more about how to coax stem cells to develop into increasingly sophisticated structures, including ones that mimic the structure and function of human organs (organoids). Artificial human kidneys, brains, hearts and more have all been created in a lab, though they are still too rudimentary for medical use.

The issue of whether there are moral differences between using stem cells to produce models of human organs for research and using stem cells to create a synthetic embryo is already playing out in law courts.

One of the key differences between organoids and synthetic embryos is their potential. If a synthetic embryo can develop into a living creature, it should have more protection than one that cannot.

Synthetic embryos do not currently have the potential to develop into a living mouse. If scientists did make human synthetic embryos, but without the potential to form a living being, they should arguably be treated similarly to organoids.

Saturday, September 3, 2022

‘The entire protein universe’: AI predicts shape of nearly every known protein

Ewen Callaway
Nature (608)
Posted with correction 29 July 22

From today, determining the 3D shape of almost any protein known to science will be as simple as typing in a Google search.

Researchers have used AlphaFold — the revolutionary artificial-intelligence (AI) network — to predict the structures of more than 200 million proteins from some 1 million species, covering almost every known protein on the planet.

The data dump is freely available on a database set up by DeepMind, the London-based AI company, owned by Google, that developed AlphaFold, and the European Molecular Biology Laboratory’s European Bioinformatics Institute (EMBL–EBI), an intergovernmental organization near Cambridge, UK.

“Essentially you can think of it covering the entire protein universe,” DeepMind chief executive Demis Hassabis said at a press briefing. “We’re at the beginning of a new era of digital biology.”

The 3D shape, or structure, of a protein is what determines its function in cells. Most drugs are designed using structural information, and the creation of accurate maps of proteins’ amino-acid arrangement is often the first step to making discoveries about how proteins work.

DeepMind developed the AlphaFold network using an AI technique called deep learning, and the AlphaFold database was launched a year ago with more than 350,000 structure predictions covering nearly every protein made by humans, mice and 19 other widely studied organisms. The catalogue has since swelled to around 1 million entries.

“We’re bracing ourselves for the release of this huge trove,” says Christine Orengo, a computational biologist at University College London, who has used the AlphaFold database to identify new families of proteins. “Having all the data predicted for us is just fantastic.”


But such entries tend to be skewed toward human, mouse and other mammalian proteins, Porta says. It’s likely that the AlphaFold dump will add significant knowledge, because it includes such a diverse range of organisms. “It’s going to be an awesome resource. And I’m probably going to download it as soon as it comes out,” says Porta.
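For readers who want to work with these predictions programmatically rather than through the web search box, the AlphaFold Database exposes a REST endpoint keyed by UniProt accession. The sketch below is a minimal example, assuming the public `/api/prediction/{accession}` endpoint on `alphafold.ebi.ac.uk`; check the current AlphaFold DB documentation before relying on it.

```python
import json
import urllib.request


def alphafold_prediction_url(uniprot_accession: str) -> str:
    """Build the AlphaFold DB REST URL for a UniProt accession."""
    base = "https://alphafold.ebi.ac.uk/api/prediction"
    return f"{base}/{uniprot_accession}"


def fetch_prediction(uniprot_accession: str) -> list:
    """Fetch prediction metadata (including links to the PDB/mmCIF
    structure files) as parsed JSON. Requires network access."""
    with urllib.request.urlopen(alphafold_prediction_url(uniprot_accession)) as resp:
        return json.load(resp)


# Example usage (hypothetical accession shown; P69905 is human
# haemoglobin subunit alpha in UniProt):
# entries = fetch_prediction("P69905")
# print(entries[0]["pdbUrl"])  # direct link to the predicted structure
```

From there, the downloaded PDB or mmCIF file can be opened in any standard structure viewer or parsed with a structural-biology library.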

Wednesday, October 27, 2021

Reflective Reasoning & Philosophy

Nick Byrd
Philosophy Compass
First published: 29 September 2021


Philosophy is a reflective activity. So perhaps it is unsurprising that many philosophers have claimed that reflection plays an important role in shaping and even improving our philosophical thinking. This hypothesis seems plausible given that training in philosophy has correlated with better performance on tests of reflection and reflective test performance has correlated with demonstrably better judgments in a variety of domains. This article reviews the hypothesized roles of reflection in philosophical thinking as well as the empirical evidence for these roles. This reveals that although there are reliable links between reflection and philosophical judgment among both laypeople and philosophers, the role of reflection in philosophical thinking may nonetheless depend in part on other factors, some of which have yet to be determined. So progress in research on reflection in philosophy may require further innovation in experimental methods and psychometric validation of philosophical measures.

From the Conclusion

Reflective reasoning is central to both philosophy and the cognitive science thereof. The theoretical and empirical research about reflection and its relation to philosophical thinking is voluminous. The existing findings provide preliminary evidence that reflective reasoning may be related to tendencies for certain philosophical judgments and beliefs over others. However, there are some signs that there is more to the story about reflection’s role in philosophical thinking than our current evidence can reveal. Scholars will need to continue developing new hypotheses, methods, and interpretations to reveal these hitherto latent details.

The recommendations in this article are by no means exhaustive. For instance, in addition to better experimental manipulations and measures of reflection (Byrd, 2021b), philosophers and cognitive scientists will also need to validate their measures of philosophical thinking to ensure that subtle differences in wording of thought experiments do not influence people’s judgments in unexpected ways (Cullen, 2010). After all, philosophical judgments can vary significantly depending on slight differences in wording even when reflection is not manipulated (e.g., Nahmias, Coates, & Kvaran, 2007). Scholars may also need to develop ways to empirically dissociate previously conflated philosophical judgments (Conway & Gawronski, 2013) in order to prevent and clarify misleading results (Byrd & Conway, 2019; Conway, Goldstein-Greenwood, Polacek, & Greene, 2018).

Thursday, July 29, 2021

Technology in the Age of Innovation: Responsible Innovation as a New Subdomain Within the Philosophy of Technology

von Schomberg, L., Blok, V. 
Philos. Technol. 34, 309–323 (2021). 


Praised as a panacea for resolving all societal issues, and self-evidently presupposed as technological innovation, the concept of innovation has become the emblem of our age. This is especially reflected in the context of the European Union, where it is considered to play a central role in both strengthening the economy and confronting the current environmental crisis. The pressing question is how technological innovation can be steered in the right direction. To this end, recent frameworks of Responsible Innovation (RI) focus on how to enable outcomes of innovation processes to become societally desirable and ethically acceptable. However, questions with regard to the technological nature of these innovation processes are rarely raised. For this reason, this paper raises the following research question: To what extent is RI possible in the current age, where the concept of innovation is predominantly presupposed as technological innovation? On the one hand, we depart from a post-phenomenological perspective to evaluate the possibility of RI in relation to the particular technological innovations discussed in the RI literature. On the other hand, we emphasize the central role innovation plays in the current age, and suggest that the presupposed concept of innovation projects a techno-economic paradigm. In doing so, we ultimately argue that in the attempt to steer innovation, frameworks of RI are in fact steered by the techno-economic paradigm inherent in the presupposed concept of innovation. Finally, we account for what implications this has for the societal purpose of RI.

The Conclusion

Hence, even though RI provides a critical analysis of innovation at the ontic level (i.e., concerning the introduction and usage of particular innovations), it still lacks a critical analysis at the ontological level (i.e., concerning the techno-economic paradigm of innovation). Therefore, RI is in need of a fundamental reflection that not only exposes the techno-economic paradigm of innovation—which we did in this paper—but that also explores an alternative concept of innovation which addresses the public good beyond the current privatization wave. The political origins of innovation that we encountered in Section 2, along with the political ends that the RI literature explicitly prioritizes, suggest that we should inquire into a political orientation of innovation. A crucial task of this inquiry would be to account for what such a political orientation of innovation precisely entails at the ontic level, and how it relates to the current techno-economic paradigm of innovation at the ontological level.

Saturday, July 24, 2021

Freezing Eggs and Creating Patients: Moral Risks of Commercialized Fertility

E. Reis & S. Reis-Dennis
The Hastings Center Report
Originally published 24 Nov 17


There's no doubt that reproductive technologies can transform lives for the better. Infertile couples and single, lesbian, gay, intersex, and transgender people have the potential to form families in ways that would have been inconceivable years ago. Yet we are concerned about the widespread commercialization of certain egg-freezing programs, the messages they propagate about motherhood, the way they blur the line between care and experimentation, and the manipulative and exaggerated marketing that stretches the truth and inspires false hope in women of various ages. We argue that although reproductive technologies, and egg freezing in particular, promise to improve women's care by offering more choices to achieve pregnancy and childbearing, they actually have the potential to be disempowering. First, commercial motives in the fertility industry distort women's medical deliberations, thereby restricting their autonomy; second, having the option to freeze their eggs can change the meaning of women's reproductive choices in a way that is limiting rather than liberating.

Here is an excerpt:

Egg banks are offering presumably fertile women a solution for potential infertility that they may never face. These women might pay annual egg-freezing storage rates but never use their eggs. In fact, even if a woman who froze eggs in her early twenties waited until her late thirties to use them, there can be no guarantee that those eggs would produce a viable pregnancy. James A. Grifo, program director of NYU Langone Health Fertility Center, has speculated, “[T]here have been reports of embryos that have been frozen for over 15 years making babies, and we think the same thing is going to be true of eggs.” But the truth is that the technology is so new that neither he nor we know how frozen eggs will hold up over a long period of time.

Some women in their twenties might want to hedge their bets against future infertility by freezing their eggs as a part of an egg-sharing program; others might hope to learn from a simple home test of hormone levels whether their egg supply (ovarian reserve) is low—a relatively rare condition. However, these tests are not foolproof. The ASRM has cautioned against home tests of ovarian reserve for women in their twenties because it may lead to “false reassurance or unnecessary anxiety and concern.” This kind of medicalization of fertility may not be liberating; instead, it will exert undue pressure on women and encourage them to rely on egg freezing over other reproductive options when it is far from guaranteed that those frozen eggs (particularly if the women have the condition known as premature ovarian aging) will ultimately lead to successful pregnancies and births.

Thursday, July 22, 2021

The Possibility of an Ongoing Moral Catastrophe

Williams, E.G. (2015).
Ethic Theory Moral Prac 18, 
971–982 (2015). 


This article gives two arguments for believing that our society is unknowingly guilty of serious, large-scale wrongdoing. First is an inductive argument: most other societies, in history and in the world today, have been unknowingly guilty of serious wrongdoing, so ours probably is too. Second is a disjunctive argument: there are a large number of distinct ways in which our practices could turn out to be horribly wrong, so even if no particular hypothesized moral mistake strikes us as very likely, the disjunction of all such mistakes should receive significant credence. The article then discusses what our society should do in light of the likelihood that we are doing something seriously wrong: we should regard intellectual progress, of the sort that will allow us to find and correct our moral mistakes as soon as possible, as an urgent moral priority rather than as a mere luxury; and we should also consider it important to save resources and cultivate flexibility, so that when the time comes to change our policies we will be able to do so quickly and smoothly.

Saturday, June 19, 2021

Preparing for the Next Generation of Ethical Challenges Concerning Heritable Human Genome Editing

Robert Klitzman
The American Journal of Bioethics
(2021) Volume 21 (6), 1-4.

Here is the conclusion

Moving Forward

Policymakers will thus need to make complex and nuanced risk/benefit calculations regarding costs and extents of treatments, ages of onset, severity of symptoms, degrees of genetic penetrance, disease prevalence, future scientific benefits, research costs, appropriate allocations of limited resources, and questions of who should pay.

Future efforts should thus consider examining scientific and ethical challenges in closer conjunction, not separated off, and bring together the respective strengths of the Commission’s and of the WHO Committee’s approaches. The WHO Committee includes broader stakeholders, but does not yet appear to have drawn conclusions regarding such specific medical and scientific scenarios (WHO 2020). These two groups’ respective memberships also differ in instructive ways that can mutually inform future deliberations. Among the Commission’s 18 chairs and members, only two appear to work primarily in ethics or policy; the majority are scientists (National Academy of Medicine, the National Academies of Sciences and the Royal Society 2020). In contrast, the WHO Committee includes two chairs and 16 members, with both chairs and the majority of members working primarily in ethics, policy or law (WHO 2020). ASRM and other countries’ relevant professional organizations should also stipulate that physicians and healthcare professionals should not be involved in any way in the care of patients using germline editing abroad.

The Commission’s Report thus provides valuable insights and guidelines, but multiple stakeholders will likely soon confront additional, complex dilemmas involving interplays of both science and ethics that also need urgent attention.

Tuesday, June 1, 2021

We Must Rethink the Role of Medical Expert Witnesses

Amitha Kalaichandran
Scientific American
Originally posted 5 May 21

Here are two excerpts:

The second issue is that the standard used by the courts to assess whether an expert witness’s scientific testimony can be included differs by state. Several states (including Minnesota) use the Frye Rule, established in 1923, which asks whether the expert’s assessment is generally accepted by the scientific community that specializes in this narrow field of expertise. Federally, and in several other states, the Daubert Standard of 1993 is used, which requires the expert to show their scientific reasoning (so the determination of validity is left to the courts), though acceptance within the scientific community is still a factor. Each standard has its drawbacks. For instance, in Frye, the expert’s community could be narrowly drawn by the legal team in a way that helps bolster the expert’s outdated or rare perspective, and the Daubert standard presumes that the judge and jury have an understanding of the science in order to independently assess scientific validity. Some states also strictly apply the standard, whereas others are more flexible. (The Canadian approach is derived from the case R v. Mohan, which requires that the expert be qualified and their testimony relevant, but the test for “reliability” is left to the courts.)

Third, when it comes to assessments of cause of death specifically, understanding the distinction between necessary and sufficient is important. Juries can have a hard time teasing out the difference. In the Chauvin trial, the medical expert witnesses testifying on behalf of the prosecution were aligned in their assessment of what killed Floyd: the sustained pressure of the officer’s knee on Floyd’s neck (note that asphyxia is a common cause of cardiac arrest). However, David Fowler, the medical expert witness for the defense, suggested the asphyxia was secondary, pointing to heart disease and drug intoxication as meaningful contributors to Floyd’s death.


Another improvement could involve ensuring that courts institute a more stringent application and selection process, in which medical expert witnesses would be required to demonstrate their clinical and research competence related to the specific issues in a case, and where their abilities are recognized by their professional group. For example, the American College of Cardiology could endorse a cardiologist as a leader in a relevant subspecialty—a similar approach has been suggested as a way to reform medical expert witness testimony by emergency physicians. One drawback, according to Faigman, is that courts would be unlikely to fully abdicate their role in evaluating expertise.

Wednesday, October 21, 2020

Neurotechnology can already read minds: so how do we protect our thoughts?

Rafael Yuste
El Pais
Originally posted 11 Sept 20

Here is an excerpt:

On account of these and other developments, a group of 25 scientific experts – clinical engineers, psychologists, lawyers, philosophers and representatives of different brain projects from all over the world – met in 2017 at Columbia University, New York, and proposed ethical rules for the use of these neurotechnologies. We believe we are facing a problem that affects human rights, since the brain generates the mind, which defines us as a species. At the end of the day, it is about our essence – our thoughts, perceptions, memories, imagination, emotions and decisions.

To protect citizens from the misuse of these technologies, we have proposed a new set of human rights, called “neurorights.” The most urgent of these to establish is the right to the privacy of our thoughts, since the technologies for reading mental activity are more developed than the technologies for manipulating it.

To defend mental privacy, we are working on a three-pronged approach. The first consists of legislating “neuroprotection.” We believe that data obtained from the brain, which we call “neurodata,” should be rigorously protected by laws similar to those applied to organ donations and transplants. We ask that “neurodata” not be traded and only be extracted with the consent of the individual for medical or scientific purposes.

This would be a preventive measure to protect against abuse. The second approach involves the proposal of proactive ideas; for example, that the companies and organizations that manufacture these technologies should adhere to a code of ethics from the outset, just as doctors do with the Hippocratic Oath. We are working on a “technocratic oath” with Xabi Uribe-Etxebarria, founder of the artificial intelligence company Sherpa.ai, and with the Catholic University of Chile.

The third approach involves engineering, and consists of developing both hardware and software so that brain “neurodata” remains private and only select information can be shared. The aim is to ensure that the most personal data never leaves the machines that are wired to our brain. One option is to adapt systems that are already used with financial data: open-source files and blockchain technology so that we always know where the data came from, and smart contracts to prevent data from getting into the wrong hands. And, of course, it will be necessary to educate the public and make sure that no device can use a person’s data unless he or she authorizes it at that specific time.

Tuesday, December 17, 2019

We Might Soon Build AI Who Deserve Rights

Eric Schwitzgebel
The Splintered Mind Blog
From a Talk at Notre Dame
Originally posted 17 Nov 19


Within a few decades, we will likely create AI that a substantial proportion of people believe, whether rightly or wrongly, deserve human-like rights. Given the chaotic state of consciousness science, it will be genuinely difficult to know whether and when machines that seem to deserve human-like moral status actually do deserve human-like moral status. This creates a dilemma: Either give such ambiguous machines human-like rights or don't. Both options are ethically risky. To give machines rights that they don't deserve will mean sometimes sacrificing human lives for the benefit of empty shells. Conversely, however, failing to give rights to machines that do deserve rights will mean perpetrating the moral equivalent of slavery and murder. One or another of these ethical disasters is probably in our future.


But as AI gets cuter and more sophisticated, and as chatbots start sounding more and more like normal humans, passing more and more difficult versions of the Turing Test, these movements will gain steam among the people with liberal views of consciousness. At some point, people will demand serious rights for some AI systems. The AI systems themselves, if they are capable of speech or speechlike outputs, might also demand or seem to demand rights.

Let me be clear: This will occur whether or not these systems really are conscious. Even if you’re very conservative in your view about what sorts of systems would be conscious, you should, I think, acknowledge the likelihood that if technological development continues on its current trajectory there will eventually be groups of people who assert the need for us to give AI systems human-like moral consideration.

And then we’ll need a good, scientifically justified consensus theory of consciousness to sort it out. Is this system that says, “Hey, I’m conscious, just like you!” really conscious, just like you? Or is it just some empty algorithm, no more conscious than a toaster?

Here’s my conjecture: We will face this social problem before we succeed in developing the good, scientifically justified consensus theory of consciousness that we need to solve the problem. We will then have machines whose moral status is unclear. Maybe they do deserve rights. Maybe they really are conscious like us. Or maybe they don’t. We won’t know.

And then, if we don’t know, we face quite a terrible dilemma.

If we don’t give these machines rights, and if it turns out that the machines really do deserve rights, then we will be perpetrating slavery and murder every time we assign a task and delete a program.

The blog post is here.