Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Science.

Friday, March 15, 2024

The consciousness wars: can scientists ever agree on how the mind works?

Mariana Lenharo

Neuroscientist Lucia Melloni didn’t expect to be reminded of her parents’ divorce when she attended a meeting about consciousness research in 2018. But, much like her parents, the assembled academics couldn’t agree on anything.

The group of neuroscientists and philosophers had convened at the Allen Institute for Brain Science in Seattle, Washington, to devise a way to empirically test competing theories of consciousness against each other: a process called adversarial collaboration.

Devising a killer experiment was fraught. “Of course, each of them was proposing experiments for which they already knew the expected results,” says Melloni, who led the collaboration and is based at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany. Melloni, falling back on her childhood role, became the go-between.

The collaboration Melloni is leading is one of five launched by the Templeton World Charity Foundation, a philanthropic organization based in Nassau, the Bahamas. The charity funds research into topics such as spirituality, polarization and religion; in 2019, it committed US$20 million to the five projects.

The aim of each collaboration is to move consciousness research forward by getting scientists to produce evidence that supports one theory and falsifies the predictions of another. Melloni’s group is testing two prominent ideas: integrated information theory (IIT), which claims that consciousness amounts to the degree of ‘integrated information’ generated by a system such as the human brain; and global neuronal workspace theory (GNWT), which claims that mental content, such as perceptions and thoughts, becomes conscious when the information is broadcast across the brain through a specialized network, or workspace. She and her co-leaders had to mediate between the main theorists, and seldom invited them into the same room.
---------------
Here's what the article highlights:
  • Divisions abound: Researchers disagree on the very definition of consciousness, making comparisons between theories difficult. Some focus on subjective experience, while others look at the brain's functions.
  • Testing head-to-head: New research projects are directly comparing competing theories to see which one explains experimental data better. This could be a step towards finding a unifying explanation.
  • Heated debate: The recent critique of one prominent theory, Integrated Information Theory (IIT), shows the depth of the disagreements. Some question its scientific validity, while others defend it as a viable framework.
  • Hope for progress: Despite the disagreements, there's optimism. New research methods and a younger generation of researchers focused on collaboration could lead to breakthroughs in understanding this elusive phenomenon.

Thursday, January 4, 2024

Americans’ Trust in Scientists, Positive Views of Science Continue to Decline

Brian Kennedy & Alec Tyson
Pew Research
Originally published 14 NOV 23

Impact of science on society

Overall, 57% of Americans say science has had a mostly positive effect on society. This share is down 8 percentage points since November 2021 and down 16 points since before the start of the coronavirus outbreak.

About a third (34%) now say the impact of science on society has been equally positive as negative. A small share (8%) think science has had a mostly negative impact on society.

Trust in scientists

When it comes to the standing of scientists, 73% of U.S. adults have a great deal or fair amount of confidence in scientists to act in the public’s best interests. But trust in scientists is 14 points lower than it was at the early stages of the pandemic.

The share expressing the strongest level of trust in scientists – saying they have a great deal of confidence in them – has fallen from 39% in 2020 to 23% today.

As trust in scientists has fallen, distrust has grown: Roughly a quarter of Americans (27%) now say they have not too much or no confidence in scientists to act in the public’s best interests, up from 12% in April 2020.

Ratings of medical scientists mirror the trend seen in ratings of scientists generally. Read Chapter 1 of the report for a detailed analysis of this data.

Differences between Republicans and Democrats in ratings of scientists and science

Declining levels of trust in scientists and medical scientists have been particularly pronounced among Republicans and Republican-leaning independents over the past several years. In fact, nearly four-in-ten Republicans (38%) now say they have not too much or no confidence at all in scientists to act in the public’s best interests. This share is up dramatically from the 14% of Republicans who held this view in April 2020. Much of this shift occurred during the first two years of the pandemic and has persisted in more recent surveys.


My take on why this is important:

Science is a critical driver of progress. From technological advancements to medical breakthroughs, scientific discoveries have dramatically improved our lives. Without public trust in science, these advancements may slow or stall.

Science plays a vital role in addressing complex challenges. Climate change, pandemics, and other pressing issues demand evidence-based solutions. Undermining trust in science weakens our ability to respond effectively to these challenges.

Erosion of trust can have far-reaching consequences. It can fuel misinformation campaigns, hinder scientific collaboration, and ultimately undermine public health and well-being.

Friday, October 27, 2023

Theory of consciousness branded 'pseudoscience' by neuroscientists

Clare Wilson
New Scientist
Originally posted 19 Sept 23

Consciousness is one of science’s deepest mysteries; it is considered so difficult to explain how physical entities like brain cells produce subjective sensory experiences, such as the sensation of seeing the colour red, that this is sometimes called “the hard problem” of science.

While the question has long been investigated by studying the brain, IIT came from considering the mathematical structure of information-processing networks and could also apply to animals or artificial intelligence.

It says that a network or system has a higher level of consciousness if it is more densely interconnected, such that the interactions between its connection points or nodes yield more information than if it is reduced to its component parts.

IIT predicts that it is theoretically possible to calculate a value for the level of consciousness, termed phi, of any network with known structure and functioning. But as the number of nodes within a network grows, the sums involved get exponentially bigger, meaning that it is practically impossible to calculate phi for the human brain – or indeed any information-processing network with more than about 10 nodes.
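The scaling problem described here can be sketched with a toy calculation. This is only an illustration of the combinatorics, not the actual IIT algorithm (which searches over ways of partitioning a network for the partition that minimizes integrated information); even counting just two-way splits, the search space explodes:

```python
# Toy illustration of why computing phi is intractable: IIT requires
# comparing a network against its partitions, and even the number of
# two-way splits (bipartitions) of an n-node network grows exponentially.

def bipartitions(n: int) -> int:
    """Number of ways to split n nodes into two non-empty groups."""
    return 2 ** (n - 1) - 1

for n in (5, 10, 20, 50):
    print(f"{n} nodes: {bipartitions(n):,} bipartitions to evaluate")
```

At around 10 nodes there are already hundreds of splits to evaluate; at 50 nodes there are more than 10^14, which is why phi cannot in practice be computed for any realistically sized network, let alone the human brain.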

(cut)

Giulio Tononi at the University of Wisconsin-Madison, who first developed IIT and took part in the recent testing, did not respond to New Scientist’s requests for comment. But Johannes Fahrenfort at VU Amsterdam in the Netherlands, who was not involved in the recent study, says the letter went too far. “There isn’t a lot of empirical support for IIT. But that doesn’t warrant calling it pseudoscience.”

Complicating matters, there is no single definition of pseudoscience. But IIT is not in the same league as astrology or homeopathy, says James Ladyman at the University of Bristol in the UK. “It looks like a serious attempt to understand consciousness. It doesn’t make a theory pseudoscience just because some people are making exaggerated claims.”


Summary:

A group of 124 neuroscientists, including prominent figures in the field, have criticized the integrated information theory (IIT) of consciousness in an open letter. They argue that the recent experimental evidence said to support IIT did not actually test its core ideas, which are practically impossible to test directly. IIT suggests that the level of consciousness, called “phi,” can be calculated for any network with known structure and functioning, but this becomes impractical for networks with many nodes, like the human brain. Some critics believe that IIT has been overhyped and may have unintended consequences for policies related to consciousness in fetuses and animals. However, not all experts consider IIT pseudoscience, with some seeing it as a serious attempt to understand consciousness.

The debate surrounding the integrated information theory (IIT) of consciousness is a complex one. While it's clear that the recent experimental evidence has faced criticism for not directly testing the core ideas of IIT, it's important to recognize that the study of consciousness is a challenging and ongoing endeavor.

Consciousness is indeed one of science's profound mysteries, often referred to as "the hard problem." IIT, in its attempt to address this problem, has sparked valuable discussions and research. It may not be pseudoscience, but the concerns raised about overhyping its findings are valid. It's crucial for scientific theories to be communicated accurately to avoid misinterpretation and potential policy implications.

Ultimately, the study of consciousness requires a multidisciplinary approach and the consideration of various theories, and it's important to maintain a healthy skepticism while promoting rigorous scientific inquiry in this complex field.

Sunday, August 20, 2023

When Scholars Sue Their Accusers. Francesca Gino is the Latest. Such Litigation Rarely Succeeds.

Adam Marcus and Ivan Oransky
The Chronicle of Higher Education
Originally posted 18 AUG 23

Francesca Gino has made headlines twice since June: once when serious allegations of misconduct involving her work became public, and again when she filed a $25-million lawsuit against her accusers, including Harvard University, where she is a professor at the business school.

The suit itself met with a barrage of criticism from those who worried that, as one scientist put it, it would have a “chilling effect on fraud detection.” A smaller number of people supported the move, saying that Harvard and her accusers had abandoned due process and that they believed in Gino’s integrity.

How the case will play out, of course, remains to be seen. But Gino is hardly the first researcher to sue her critics and her employer when faced with misconduct findings. As the founders of Retraction Watch, a website devoted to covering problems in the scientific literature, we’ve reported many of these kinds of cases since we launched our blog in 2010. Plaintiffs tend to claim defamation, but sometimes sue over wrongful termination or employment discrimination, and these kinds of cases typically end up in federal courts. A look at how some other suits fared might yield recommendations for how to limit the pain they can cause.

The first thing to know about defamation and employment suits is that most plaintiffs, but not all, lose. Mario Saad, a diabetes researcher at Brazil’s Unicamp, found that out when he sued the American Diabetes Association in the very same federal district court in Massachusetts where Gino filed her case.

Saad was trying to prevent Diabetes, the flagship research journal of the American Diabetes Association, from publishing expressions of concern about four of his papers following allegations of image manipulation. He lost that effort in 2015, and has now had 18 papers retracted.

(cut)

Such cases can be extremely expensive — not only for the defense, whether the costs are borne by institutions or insurance companies, but also for the plaintiffs. Ask Carlo Croce and Mark Jacobson.

Croce, a cancer researcher at Ohio State University, has at various points sued The New York Times, a Purdue University biologist named David Sanders, and Ohio State. He has lost all of those cases, including on appeal. The suits against the Times and Sanders claimed that a front-page story in 2017 that quoted Sanders had defamed Croce. His suit against Ohio State alleged that he had been improperly removed as department chair.

Croce racked up some $2 million in legal bills — and was sued for nonpayment. A judge has now ordered Croce’s collection of old masters paintings to be seized and sold for the benefit of his lawyers, and has also garnished Croce’s bank accounts. Another judgment means that his lawyers may now foreclose on his house to recoup their costs. Ohio State has been garnishing his wages since March by about $15,600 each month, or about a quarter of his paycheck. He continues to earn more than $800,000 per year from the university, even after a professorship and the chair were taken away from him.

When two researchers published a critique of the work of Mark Jacobson, an energy researcher at Stanford University, in the Proceedings of the National Academy of Sciences, Jacobson sued them along with the journal’s publisher for $10 million. He dropped the case just months after filing it.

But thanks to a so-called anti-SLAPP statute, “designed to provide for early dismissal of meritless lawsuits filed against people for the exercise of First Amendment rights,” a judge has ordered Jacobson to pay $500,000 in legal fees to the defendants. Jacobson wants Stanford to pay those costs, and California’s labor commissioner said the university had to pay at least some of them because protecting his reputation was part of Jacobson’s job. The fate of those fees, and who will pay them, is up in the air, with Jacobson once again appealing the judgment against him.

Monday, May 8, 2023

What Thomas Kuhn Really Thought about Scientific "Truth"

John Horgan
Scientific American
Originally posted 23 May 12

Here are two excerpts:

Denying the view of science as a continual building process, Kuhn held that a revolution is a destructive as well as a creative act. The proposer of a new paradigm stands on the shoulders of giants (to borrow Newton's phrase) and then bashes them over the head. He or she is often young or new to the field, that is, not fully indoctrinated. Most scientists yield to a new paradigm reluctantly. They often do not understand it, and they have no objective rules by which to judge it. Different paradigms have no common standard for comparison; they are "incommensurable," to use Kuhn's term. Proponents of different paradigms can argue forever without resolving their basic differences because they invest basic terms—motion, particle, space, time—with different meanings. The conversion of scientists is thus both a subjective and political process. It may involve sudden, intuitive understanding—like that finally achieved by Kuhn as he pondered Aristotle. Yet scientists often adopt a paradigm simply because it is backed by others with strong reputations or by a majority of the community.

Kuhn's view diverged in several important respects from the philosophy of Karl Popper, who held that theories can never be proved but only disproved, or "falsified." Like other critics of Popper, Kuhn argued that falsification is no more possible than verification; each process wrongly implies the existence of absolute standards of evidence, which transcend any individual paradigm. A new paradigm may solve puzzles better than the old one does, and it may yield more practical applications. "But you cannot simply describe the other science as false," Kuhn said. Just because modern physics has spawned computers, nuclear power and CD players, he suggested, does not mean it is truer, in an absolute sense, than Aristotle's physics. Similarly, Kuhn denied that science is constantly approaching the truth. At the end of Structure he asserted that science, like life on earth, does not evolve toward anything but only away from something.

(cut)

Kuhn declared that, although his book was not intended to be pro-science, he is pro-science. It is the rigidity and discipline of science, Kuhn said, that makes it so effective at problem-solving. Moreover, science produces "the greatest and most original bursts of creativity" of any human enterprise. Kuhn conceded that he was partly to blame for some of the anti-science interpretations of his model. After all, in Structure he did call scientists committed to a paradigm "addicts"; he also compared them to the brainwashed characters in Orwell's 1984. Kuhn insisted that he did not mean to be condescending by using terms such as "mopping up" or "puzzle-solving" to describe what most scientists do. "It was meant to be descriptive." He ruminated a bit. "Maybe I should have said more about the glories that result from puzzle solving, but I thought I was doing that."

As for the word "paradigm," Kuhn conceded that it had become "hopelessly overused" and is "out of control." Like a virus, the word spread beyond the history and philosophy of science and infected the intellectual community at large, where it came to signify virtually any dominant idea. A 1974 New Yorker cartoon captured the phenomenon. "Dynamite, Mr. Gerston!" gushed a woman to a smug-looking man. "You're the first person I ever heard use 'paradigm' in real life." The low point came during the Bush administration, when White House officials introduced an economic plan called "the New Paradigm" (which was really just trickle-down economics).

Saturday, November 26, 2022

Why are scientists growing human brain cells in the lab?

Hannah Flynn
Medical News Today
Originally posted 24 OCT 22

Here is an excerpt:

Ethical boundaries

One of the limitations of using organoids for research is that they are observed in vitro. The way an organ might act in a system, in connection with different organs, or when exposed to metabolites in the blood, for example, could be different from how it behaves when cells are isolated in a single tissue.

More recently, researchers placed an organoid derived from human cells inside the brain of a rat, in a study outlined in Nature.

Using neural organoids that had been allowed to self-organize, these were implanted into the somatosensory cortex — which is in the middle of the brain — of newborn rats. The scientists then found that these cortical organoids had grown axons throughout the rat brain, and were able to contribute to reward-seeking behavior in the rat.

This breakthrough suggested that the lab-created cells are recognizable to other tissues in the body and can influence systems.

Combining the cells of animals and humans is not without some ethical considerations. In fact, this has been the focus of a recent project.

The Brainstorm Organoid Project published its first paper in the form of a comment piece outlining the benefits of the project in Nature Neuroscience on October 18, 2022, the week after the aforementioned study was published.

The Project brought together prominent bioethicists as part of the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative of the US National Institutes of Health, which funded the project.

Co-author of the comment piece Dr. Jeantine E Lunshof, head of collaborative ethics at the Wyss Institute for Biologically Inspired Engineering at Harvard University, MA, told Medical News Today in an interview that existing biomedical research and animal welfare guidelines already provide a framework for this type of work to be done ethically.

Pointing to the International Society for Stem Cell Research guidelines published last year, she stated that those do cover the creation of chimeras, where cells of two species are combined.

These hybrids with non-primates are permitted, she explained: “This is very, very strong emphasis on animal welfare in this ISSCR guideline document that also aligns with existing animal welfare and animal research protocols.”

The potential benefits of this research needed to be considered, “though at this moment, we are still at the stage that a lot of fundamental research is necessary. And I think that that really must be emphasized,” she said.

Saturday, November 19, 2022

Human mini-brains were transplanted into rats. Is this ethical?

Julian Savulescu
channelnewsasia.com
Originally posted 22 OCT 22

Here is an excerpt:

Are 'Humanized Rats' just rats?

In a world-first, scientists have transplanted human brain cells into the brains of baby rats, offering immense possibilities to study and develop treatment for neurological and psychiatric conditions.

The human brain tissue, known as brain organoids or “mini-organs”, are independent nerve structures grown in a lab from a person’s cells, such as their skin cells, using stem cell technology. Although they can’t yet replicate a full brain, they resemble features or parts of an embryonic human brain.

The study, published in the journal Nature on Oct 12, showed that the human organoids integrated into the rat brain and functioned, and were even capable of affecting the behaviour of the rats.

A few months later, up to one-sixth of the rat cortex was human. In terms of their biology, they were “humanised rats”.

This is an exciting discovery for science. It will allow brain organoids to grow bigger than they have in a lab, and opens up many possibilities of understanding how early human neurons develop and form the brain, and what goes wrong in disease. It also raises the possibility of organoids being used to treat brain injury.

Indeed, the rat models showed the neuronal defects related to one rare severe disease called Timothy Syndrome, a genetic condition that affects brain development and causes severe autism.

This is one step further along the long road to making progress in brain disease, which has proved so intransigent so far.

The research must go ahead. But at the same time, it calls for new standards to be set for future research. At present, the research raises no significant new ethical issues. However, it opens the door to more elaborate or ambitious research that could raise significant ethical issues.

Moral Status of Animals with Human Tissue

The human tissue transplanted into the rats’ brains were in a region that processes sensory information such as touch and pain.

These organoids did not increase the capacities of the rats. But as larger organoids are introduced, or organoids are introduced affecting more key areas of the brain, the rat brain may acquire more advanced consciousness, including higher rational capacities or self-consciousness.

This would raise issues of how such “enhanced” rats ought to be treated. It would be important to not treat them as rats, just because they look like rats, if their brains are significantly enhanced.

This requires discussion and boundaries set around what kinds of organoids can be implanted and what key sites would be targets for enhancement of capacities that matter to moral status.

Wednesday, October 12, 2022

Gender-diverse teams produce more novel and higher-impact scientific ideas

Yang, Y., Tian, T. Y., et al. (2022, August 29). 
Proceedings of the National Academy of Sciences, 119(36).
https://doi.org/10.1073/pnas.2200841119

Abstract

Science’s changing demographics raise new questions about research team diversity and research outcomes. We study mixed-gender research teams, examining 6.6 million papers published across the medical sciences since 2000 and establishing several core findings. First, the fraction of publications by mixed-gender teams has grown rapidly, yet mixed-gender teams continue to be underrepresented compared to the expectations of a null model. Second, despite their underrepresentation, the publications of mixed-gender teams are substantially more novel and impactful than the publications of same-gender teams of equivalent size. Third, the greater the gender balance on a team, the better the team scores on these performance measures. Fourth, these patterns generalize across medical subfields. Finally, the novelty and impact advantages seen with mixed-gender teams persist when considering numerous controls and potential related features, including fixed effects for the individual researchers, team structures, and network positioning, suggesting that a team’s gender balance is an underrecognized yet powerful correlate of novel and impactful scientific discoveries.

Significance

Science teams made up of men and women produce papers that are more novel and highly cited than those of all-men or all-women teams. These performance advantages increase the greater the team’s gender balance and appear nearly universal. On average, they hold for small and large teams, the 45 subfields of medicine, and women- or men-led teams and generalize to published papers in all science fields over the last 20 y. Notwithstanding these benefits, gender-diverse teams remain underrepresented in science when compared to what is expected if the teams in the data had been formed without regard to gender. These findings reveal potentially new gender and teamwork synergies that correlate with scientific discoveries and inform diversity, equity, and inclusion (DEI) initiatives.

Discussion

Conducting an analysis of 6.6 million published papers from more than 15,000 different medical journals worldwide, we find that mixed-gender teams—teams combining women and men scientists—produce more novel and more highly cited papers than all-women or all-men teams. Mixed-gender teams publish papers that are up to 7% more novel and 14.6% more likely to be upper-tail papers than papers published by same-gender teams, results that are robust to numerous institutional, team, and individual controls and further generalize by subfield. Finally, in exploring gender in science through the lens of teamwork, the results point to a potentially transformative approach for thinking about and capturing the value of gender diversity in science.

Another key finding of this work is that mixed-gender teams are significantly underrepresented compared to what would be expected by chance. This underrepresentation is all the more striking given the findings that gender-diverse teams produce more novel and high-impact research and suggests that gender-diverse teams may have substantial untapped potential for medical research. Nevertheless, the underrepresentation of gender-diverse teams may reflect research showing that women receive less credit for their successes than do men teammates, which in turn inhibits the formation of gender-diverse teams and women’s success in receiving grants, prizes, and promotions.

Saturday, October 8, 2022

Preventing an AI-related catastrophe

Benjamin Hilton
80,000 Hours
Originally Published August 25th, 2022

Summary

We expect that there will be substantial progress in AI in the next few decades, potentially even to the point where machines come to outperform humans in many, if not all, tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don’t find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). We think more work needs to be done to reduce these risks.

Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity. There have not yet been any satisfying answers to concerns — discussed below — about how this rapidly approaching, transformative technology can be safely developed and integrated into our society. Finding answers to these concerns is very neglected, and may well be tractable. We estimate that there are around 300 people worldwide working directly on this. As a result, the possibility of AI-related catastrophe may be the world’s most pressing problem — and the best thing to work on for those who are well-placed to contribute.

Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. If worthwhile policies are developed, we’ll need people to put them in place and implement them. There are also many opportunities to have a big impact in a variety of complementary roles, such as operations management, journalism, earning to give, and more — some of which we list below.

(cut)

When can we expect transformative AI?

It’s difficult to predict exactly when we will develop AI that we expect to be hugely transformative for society (for better or for worse) — for example, by automating all human work or drastically changing the structure of society. But here we’ll go through a few approaches.

One option is to survey experts. Data from the 2019 survey of 300 AI experts implies that there is a 20% probability of human-level machine intelligence (which would plausibly be transformative in this sense) by 2036, 50% probability by 2060, and 85% by 2100. There are a lot of reasons to be suspicious of these estimates, but we take it as one data point.

Ajeya Cotra (a researcher at Open Philanthropy) attempted to forecast transformative AI by comparing modern deep learning to the human brain. Deep learning involves using a huge amount of compute to train a model, before that model is able to perform some task. There’s also a relationship between the amount of compute used to train a model and the amount used by the model when it’s run. And — if the scaling hypothesis is true — we should expect the performance of a model to predictably improve as the computational power used increases. So Cotra used a variety of approaches (including, for example, estimating how much compute the human brain uses on a variety of tasks) to estimate how much compute might be needed to train a model that, when run, could carry out the hardest tasks humans can do. She then estimated when using that much compute would be affordable.
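The last step of this approach can be caricatured in a few lines of arithmetic. The sketch below is a toy model under made-up assumptions (the required compute, current price-performance, budget, and halving time are all hypothetical placeholders, not figures from Cotra's report): given a guess at the training compute needed and a steady decline in the price of compute, solve for the year when a fixed budget could afford the training run.

```python
import math

def affordable_year(required_flop: float, flop_per_dollar: float,
                    budget: float, halving_years: float,
                    start_year: int = 2022) -> float:
    """Year when `budget` dollars buys `required_flop` of training compute,
    assuming price-performance doubles every `halving_years` years."""
    flop_now = flop_per_dollar * budget
    if flop_now >= required_flop:
        return start_year
    # Each halving period doubles the FLOP a dollar buys.
    doublings = math.log2(required_flop / flop_now)
    return start_year + doublings * halving_years

# All inputs below are hypothetical placeholders.
year = affordable_year(required_flop=1e30, flop_per_dollar=1e17,
                       budget=1e9, halving_years=2.5)
print(round(year))  # → 2055 under these made-up numbers
```

The interesting modelling work is, of course, in justifying the inputs; the arithmetic itself is trivial.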

Cotra’s 2022 update on her report’s conclusions estimates that there is a 35% probability of transformative AI by 2036, 50% by 2040, and 60% by 2050 — noting that these guesses are not stable.

Tom Davidson (also a researcher at Open Philanthropy) wrote a report to complement Cotra’s work. He attempted to figure out when we might expect to see transformative AI based only on looking at various types of research that transformative AI might be like (e.g. developing technology that’s the ultimate goal of a STEM field, or proving difficult mathematical conjectures), and how long it’s taken for each of these kinds of research to be completed in the past, given some quantity of research funding and effort.

Davidson’s report estimates that, solely on this information, you’d think that there was an 8% chance of transformative AI by 2036, 13% by 2060, and 20% by 2100. However, Davidson doesn’t consider the actual ways in which AI has progressed since research started in the 1950s, and notes that it seems likely that the amount of effort we put into AI research will increase as AI becomes increasingly relevant to our economy. As a result, Davidson expects these numbers to be underestimates.

Tuesday, September 13, 2022

First synthetic embryos: the scientific breakthrough raises serious ethical questions

Savulescu, J., Gyngell, C., & Sawai, T.
The Conversation
Originally posted 11 AUG 22

Here is an excerpt:

Artificial wombs

In the latest study, the scientists started with collections of stem cells. The conditions created by the external uterus triggered the developmental process that makes a fetus. Although the scientists said we are a long way off synthetic human embryos, the experiment brings us closer to a future where some humans gestate their babies artificially.

Each year over 300,000 women worldwide die in childbirth or as a result of pregnancy complications, many because they lack basic care. Even in wealthy countries, pregnancy and childbirth are risky, and healthcare providers are criticised for failing mothers.

There is an urgent need to make healthcare more accessible across the planet, provide better mental health support for mothers and make pregnancy and childbirth safer. In an ideal world every parent should expect excellent care in all aspects of motherhood. This technology could help treat premature babies and give at least some women a different option: a choice of whether to carry their child or use an external uterus.

Some philosophers say there is a moral imperative to develop artificial wombs to help remedy the unfairness of parenting roles. But other researchers say artificial wombs would threaten a woman’s legal right to terminate a pregnancy.

Synthetic embryos and organs

In the last few years, scientists have learned more about how to coax stem cells to develop into increasingly sophisticated structures, including ones that mimic the structure and function of human organs (organoids). Artificial human kidneys, brains, hearts and more have all been created in a lab, though they are still too rudimentary for medical use.

The issue of whether there are moral differences between using stem cells to produce models of human organs for research and using stem cells to create a synthetic embryo is already playing out in law courts.

One of the key differences between organoids and synthetic embryos is their potential. If a synthetic embryo can develop into a living creature, it should have more protection than one that cannot.

Synthetic embryos do not currently have the potential to develop into a living mouse. If scientists did make human synthetic embryos, but without the potential to form a living being, they should arguably be treated similarly to organoids.

Saturday, September 3, 2022

‘The entire protein universe’: AI predicts shape of nearly every known protein

Ewen Callaway
Nature (608)
Posted with correction 29 July 22

From today, determining the 3D shape of almost any protein known to science will be as simple as typing in a Google search.

Researchers have used AlphaFold — the revolutionary artificial-intelligence (AI) network — to predict the structures of more than 200 million proteins from some 1 million species, covering almost every known protein on the planet.

The data dump is freely available on a database set up by DeepMind, the London-based AI company, owned by Google, that developed AlphaFold, and the European Molecular Biology Laboratory’s European Bioinformatics Institute (EMBL–EBI), an intergovernmental organization near Cambridge, UK.

“Essentially you can think of it covering the entire protein universe,” DeepMind chief executive Demis Hassabis said at a press briefing. “We’re at the beginning of a new era of digital biology.”

The 3D shape, or structure, of a protein is what determines its function in cells. Most drugs are designed using structural information, and the creation of accurate maps of proteins’ amino-acid arrangement is often the first step to making discoveries about how proteins work.

DeepMind developed the AlphaFold network using an AI technique called deep learning, and the AlphaFold database was launched a year ago with more than 350,000 structure predictions covering nearly every protein made by humans, mice and 19 other widely studied organisms. The catalogue has since swelled to around 1 million entries.

“We’re bracing ourselves for the release of this huge trove,” says Christine Orengo, a computational biologist at University College London, who has used the AlphaFold database to identify new families of proteins. “Having all the data predicted for us is just fantastic.”

(cut)

But such entries tend to be skewed toward human, mouse and other mammalian proteins, Porta says. It’s likely that the AlphaFold dump will add significant knowledge, because it includes such a diverse range of organisms. “It’s going to be an awesome resource. And I’m probably going to download it as soon as it comes out,” says Porta.

Sunday, August 28, 2022

Dr. Oz Shouldn’t Be a Senator—or a Doctor

Timothy Caulfield
Scientific American
Originally posted 15 DEC 21

While holding a medical license, Mehmet Oz, widely known as Dr. Oz, has long pushed misleading, science-free and unproven alternative therapies such as homeopathy, as well as fad diets, detoxes and cleanses. Some of these have been potentially harmful, including hydroxychloroquine, which he once touted as beneficial for treating or preventing COVID. This assertion has been thoroughly debunked.

He’s built a tremendous following around his lucrative but evidence-free advice. So, are we surprised that Oz is running as a Republican for the U.S. Senate in Pennsylvania? No, we are not. Misinformation-spouting celebrities seem to be a GOP favorite. This move is very on brand for both Oz and the Republican Party.

His candidacy is a reminder that tolerating and/or enabling celebrity pseudoscience (I’m thinking of you, Oprah Winfrey!) can have serious and enduring consequences. Much of Oz’s advice was bunk before the pandemic, it is bunk now, and there is no reason to assume it won’t be bunk after—even if he becomes Senator Oz. Indeed, as Senator Oz, it’s all but guaranteed he would bring pseudoscience to the table when crafting and voting on legislation that affects the health and welfare of Americans.

To someone who researches the spread of health misinformation, Oz’s candidacy is deeply grating in that “of course he is” kind of way. But it is also an opportunity to highlight several realities about pseudoscience, celebrity physicians and the current regulatory environment that allows people like him to continue to call themselves doctor.

Before the pandemic I often heard people argue that the wellness woo coming from celebrities like Gwyneth Paltrow, Tom Brady and Oz was mostly harmless noise. If people want to waste their money on ridiculous vagina eggs, bogus diets or unproven alternative remedies, why should we care? Buyer beware, a fool and their money, a sucker is born every minute, etc., etc.

But we know, now more than ever, that pop culture can—for better or worse—have a significant impact on health beliefs and behaviors. Indeed, one need only consider the degree to which Jenny McCarthy gave life to the vile claim that autism is linked to vaccination. Celebrity figures like podcast host Joe Rogan and football player Aaron Rodgers have greatly added to the chaotic information regarding COVID-19 by magnifying unsupported claims.

Wednesday, July 27, 2022

Blots on a Field? (A modern story of unethical research related to Alzheimer's)

Charles Pillar
Science Magazine
Originally posted 21 JUL 22

Here is an excerpt:

A 6-month investigation by Science provided strong support for Schrag’s suspicions and raised questions about Lesné’s research. A leading independent image analyst and several top Alzheimer’s researchers—including George Perry of the University of Texas, San Antonio, and John Forsayeth of the University of California, San Francisco (UCSF)—reviewed most of Schrag’s findings at Science’s request. They concurred with his overall conclusions, which cast doubt on hundreds of images, including more than 70 in Lesné’s papers. Some look like “shockingly blatant” examples of image tampering, says Donna Wilcock, an Alzheimer’s expert at the University of Kentucky.

The authors “appeared to have composed figures by piecing together parts of photos from different experiments,” says Elisabeth Bik, a molecular biologist and well-known forensic image consultant. “The obtained experimental results might not have been the desired results, and that data might have been changed to … better fit a hypothesis.”

Early this year, Schrag raised his doubts with NIH and journals including Nature; two, including Nature last week, have published expressions of concern about papers by Lesné. Schrag’s work, done independently of Vanderbilt and its medical center, implies millions of federal dollars may have been misspent on the research—and much more on related efforts. Some Alzheimer’s experts now suspect Lesné’s studies have misdirected Alzheimer’s research for 16 years.

“The immediate, obvious damage is wasted NIH funding and wasted thinking in the field because people are using these results as a starting point for their own experiments,” says Stanford University neuroscientist Thomas Südhof, a Nobel laureate and expert on Alzheimer’s and related conditions.

Lesné did not respond to requests for comment. A UMN spokesperson says the university is reviewing complaints about his work.

To Schrag, the two disputed threads of Aβ research raise far-reaching questions about scientific integrity in the struggle to understand and cure Alzheimer’s. Some adherents of the amyloid hypothesis are too uncritical of work that seems to support it, he says. “Even if misconduct is rare, false ideas inserted into key nodes in our body of scientific knowledge can warp our understanding.”

(cut)

The paper provided an “important boost” to the amyloid and toxic oligomer hypotheses when they faced rising doubts, Südhof says. “Proponents loved it, because it seemed to be an independent validation of what they have been proposing for a long time.”

“That was a really big finding that kind of turned the field on its head,” partly because of Ashe’s impeccable imprimatur, Wilcock says. “It drove a lot of other investigators to … go looking for these [heavier] oligomer species.”

As Ashe’s star burned more brightly, Lesné’s rose. He joined UMN with his own NIH-funded lab in 2009. Aβ*56 remained a primary research focus. Megan Larson, who worked as a junior scientist for Lesné and is now a product manager at Bio-Techne, a biosciences supply company, calls him passionate, hardworking, and charismatic. She and others in the lab often ran experiments and produced Western blots, Larson says, but in their papers together, Lesné prepared all the images for publication.

Tuesday, March 29, 2022

Gene editing gets safer thanks to redesigned Cas9 protein

Science Daily
Originally posted 2 MAR 22

Summary:

Scientists have redesigned a key component of a widely used CRISPR-based gene-editing tool, called Cas9, to be thousands of times less likely to target the wrong stretch of DNA while remaining just as efficient as the original version, making it potentially much safer.


One of the grand challenges with using CRISPR-based gene editing on humans is that the molecular machinery sometimes makes changes to the wrong section of a host's genome, creating the possibility that an attempt to repair a genetic mutation in one spot in the genome could accidentally create a dangerous new mutation in another.

But now, scientists at The University of Texas at Austin have redesigned a key component of a widely used CRISPR-based gene-editing tool, called Cas9, to be thousands of times less likely to target the wrong stretch of DNA while remaining just as efficient as the original version, making it potentially much safer. The work is described in a paper published today in the journal Nature.

"This really could be a game changer in terms of a wider application of the CRISPR Cas systems in gene editing," said Kenneth Johnson, a professor of molecular biosciences and co-senior author of the study with David Taylor, an assistant professor of molecular biosciences. The paper's co-first authors are postdoctoral fellows Jack Bravo and Mu-Sen Liu.


Journal Reference:

Jack P. K. Bravo, Mu-Sen Liu, et al. Structural basis for mismatch surveillance by CRISPR–Cas9. Nature, 2022; DOI: 10.1038/s41586-022-04470-1

Wednesday, October 27, 2021

Reflective Reasoning & Philosophy

Nick Byrd
Philosophy Compass
First published: 29 September 2021

Abstract

Philosophy is a reflective activity. So perhaps it is unsurprising that many philosophers have claimed that reflection plays an important role in shaping and even improving our philosophical thinking. This hypothesis seems plausible given that training in philosophy has correlated with better performance on tests of reflection and reflective test performance has correlated with demonstrably better judgments in a variety of domains. This article reviews the hypothesized roles of reflection in philosophical thinking as well as the empirical evidence for these roles. This reveals that although there are reliable links between reflection and philosophical judgment among both laypeople and philosophers, the role of reflection in philosophical thinking may nonetheless depend in part on other factors, some of which have yet to be determined. So progress in research on reflection in philosophy may require further innovation in experimental methods and psychometric validation of philosophical measures.

From the Conclusion

Reflective reasoning is central to both philosophy and the cognitive science thereof. The theoretical and empirical research about reflection and its relation to philosophical thinking is voluminous. The existing findings provide preliminary evidence that reflective reasoning may be related to tendencies for certain philosophical judgments and beliefs over others. However, there are some signs that there is more to the story about reflection’s role in philosophical thinking than our current evidence can reveal. Scholars will need to continue developing new hypotheses, methods, and interpretations to reveal these hitherto latent details.

The recommendations in this article are by no means exhaustive. For instance, in addition to better experimental manipulations and measures of reflection (Byrd, 2021b), philosophers and cognitive scientists will also need to validate their measures of philosophical thinking to ensure that subtle differences in wording of thought experiments do not influence people’s judgments in unexpected ways (Cullen, 2010). After all, philosophical judgments can vary significantly depending on slight differences in wording even when reflection is not manipulated (e.g., Nahmias, Coates, & Kvaran, 2007). Scholars may also need to develop ways to empirically dissociate previously conflated philosophical judgments (Conway & Gawronski, 2013) in order to prevent and clarify misleading results (Byrd & Conway, 2019; Conway, Goldstein-Greenwood, Polacek, & Greene, 2018).

Sunday, August 22, 2021

America’s long history of anti-science has dangerously undermined the COVID vaccine

Peter Hotez
The Dallas Morning News
Originally published 15 Aug 21

Here is an excerpt:

America’s full-throated enthusiasm for vaccines lasted until the early 2000s. The 1998 Lancet publication of a paper from Andrew Wakefield and his colleagues, which wrongly asserted that the measles virus in the MMR vaccine replicated in the colons of children to cause pervasive developmental disorder (autism), ushered in a new era of distrust of vaccines.

It also resulted in distrust of the U.S. Health and Human Services agencies promoting vaccinations. The early response from the Centers for Disease Control and Prevention was to dismiss growing American discontent with vaccines as a fringe element, until anti-vaccine sentiment eventually spread across the internet in the 2010s.

The anti-vaccine movement eventually adopted the language of medical freedom, using it to gain strength and grow in size, internet presence and external funding. Rising out of the American West, anti-vaccine proponents insisted that only parents could make vaccine choices, and they were prepared to resist government requirements for school entry or attendance.

In California, the notion of vaccine choice gained strength in the 2010s, leading to widespread philosophical exemptions to childhood MMR vaccines and other immunizations. Vaccine exemptions reached critical mass, ultimately culminating in a 2014–2015 measles epidemic in Orange County.

The outbreak prompted state government intervention through the introduction of California Senate Bill 277 that eliminated these exemptions and prevented further epidemics, but it also triggered aggressive opposition. Anti-vaccine health freedom groups harassed members of the Legislature and labeled prominent scientists as pharma shills. They touted pseudoscience, claiming that vaccines were toxic, or that natural immunity acquired from the illness was superior and more durable than vaccine-induced immunity.

Health freedom then expanded through newly established anti-vaccine political action committees in Texas and Oklahoma in the Southwest, Oregon in the Pacific Northwest, and Michigan and Ohio in the Midwest, while additional anti-vaccine organizations formed in almost every state.

These groups lobbied state legislatures to promote or protect vaccine exemptions, while working to cloak or obscure classroom or schoolwide disclosures of vaccine exemptions. They also introduced menacing consent forms to portray vaccines as harmful or toxic.

The Texans for Vaccine Choice PAC formed in 2015, helping to accelerate personal belief immunization exemptions to a point where today approximately 72,000 Texas schoolchildren miss vaccines required for school entry and attendance.

Friday, August 20, 2021

Would I Give Aducanumab to My Mother?

Dena S. Davis
The Hastings Center
Originally published 11 June 21

Here is an excerpt:

First, it is not at all clear that the drug works, in terms of affecting cognition and slowing decline. As Jason Karlawish explains in an incisive piece in STAT, crucial scientific steps were missed, and the current data are inconclusive and contradictory. 

Side effects include possible brain swelling and bleeds (which appear to be severe in about 6% of patients), headache, falls, diarrhea, and what Biogen describes as “confusion/delirium/altered mental status/disorientation.”  Wait a minute!  I thought the reason to take this drug was that one already had altered mental status and confusion.

Before someone is even considered eligible for aducanumab, they must undergo a PET scan to confirm that they have elevated levels of amyloid, and then an MRI to make sure they don’t already have brain swelling. MRIs have to be repeated regularly while people are on the drug. I know perfectly competent adults who are freaked out by MRIs. How do you explain this to someone with dementia? Or do you sedate them, thus adding to the risk? Furthermore, the drug itself is not a pill, but a monthly infusion.

Put that all together, and it just doesn’t add up. How would my mother’s life change for the better? There is little evidence of the drug’s efficacy. Meanwhile, her peaceful life in her rural home with her dedicated caregiver would now be punctuated by trips to the hospital for MRIs, and monthly struggles to start infusions in her 90-year-old body, with its tiny veins and paper-thin skin. Aducanumab is apparently best suited to people in the early stages of Alzheimer’s, but even in the earliest stage my mother refused to accept that she had a problem. I cannot imagine successfully explaining that we were taking these measures in the faint hope of combatting a problem she insisted she didn’t have. And in the absence of an explanation she could understand, surely the frequent hospital trips would feel to her like unpleasant, even scary, invasions.


Saturday, August 14, 2021

How does COVID affect the brain? Two neuroscientists explain

T. Kilpatrick & S. Petrou
The Conversation
Originally posted 11 Aug 21

Here is an excerpt:

In a UK-based study released as a pre-print online in June, researchers compared brain images taken of people before and after exposure to COVID. They showed parts of the limbic system had decreased in size compared to people not infected. This could signal a future vulnerability to brain diseases and may play a role in the emergence of long-COVID symptoms.

COVID could also indirectly affect the brain. The virus can damage blood vessels and cause either bleeding or blockages resulting in the disruption of blood, oxygen, or nutrient supply to the brain, particularly to areas responsible for problem solving.

The virus also activates the immune system, and in some people, this triggers the production of toxic molecules which can reduce brain function.

Although research on this is still emerging, the effects of COVID on nerves that control gut function should also be considered. This may impact digestion and the health and composition of gut bacteria, which are known to influence the function of the brain.

The virus could also compromise the function of the pituitary gland. The pituitary gland, often known as the “master gland”, regulates hormone production. This includes cortisol, which governs our response to stress. When cortisol is deficient, this may contribute to long-term fatigue.

Sunday, August 8, 2021

Spreading False Vax Info Might Cost You Your Medical License

Ryan Basen
Medpagetoday.com
Originally posted 3 Aug 21

Physicians who intentionally spread misinformation or disinformation about the COVID-19 vaccines could be disciplined by state medical boards and may have their licenses suspended or taken away, said the Federation of State Medical Boards (FSMB).

Due "to a dramatic increase in the dissemination of COVID-19 vaccine misinformation and disinformation by physicians and other health care professionals on social media platforms, online and in the media," the FSMB, a national nonprofit representing medical boards that license and discipline allopathic and osteopathic physicians, issued the following statement:
Physicians who willfully generate and spread COVID-19 vaccine misinformation or disinformation are risking disciplinary action by state medical boards, including the suspension or revocation of their medical license. Due to their specialized knowledge and training, licensed physicians possess a high degree of public trust and therefore have a powerful platform in society, whether they recognize it or not. They also have an ethical and professional responsibility to practice medicine in the best interests of their patients and must share information that is factual, scientifically grounded and consensus driven for the betterment of public health. Spreading inaccurate COVID-19 vaccine information contradicts that responsibility, threatens to further erode public trust in the medical profession and puts all patients at risk.

The FSMB is aiming to remind physicians that words matter, that they have a platform, and that misinformation and disinformation -- especially within the context of the pandemic -- can cause harm, said president and CEO Humayun Chaudhry, DO. "I hope that physicians and other licensees get the message," he added.


Monday, August 2, 2021

Landmark research integrity survey finds questionable practices are surprisingly common

Jop De Vrieze
Science Magazine
Originally posted 7 Jul 21

More than half of Dutch scientists regularly engage in questionable research practices, such as hiding flaws in their research design or selectively citing literature, according to a new study. And one in 12 admitted to committing a more serious form of research misconduct within the past 3 years: the fabrication or falsification of research results.

This rate of 8% for outright fraud was more than double that reported in previous studies. Organizers of the Dutch National Survey on Research Integrity, the largest of its kind to date, took special precautions to guarantee the anonymity of respondents for these sensitive questions, says Gowri Gopalakrishna, the survey’s leader and an epidemiologist at Amsterdam University Medical Center (AUMC). “That method increases the honesty of the answers,” she says. “So we have good reason to believe that our outcome is closer to reality than that of previous studies.” The survey team published results on 6 July in two preprint articles, which also examine factors that contribute to research misconduct, on MetaArXiv.
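The preprints detail the survey's specific anonymity safeguards. One classic technique for eliciting honest answers to sensitive questions (not necessarily the one this survey used) is randomized response: each respondent flips a coin in private before answering, so no individual "yes" is incriminating, yet the true prevalence can still be recovered in aggregate:

```python
import random

# Randomized response (forced-response variant), a classic sketch and
# not necessarily the Dutch survey's actual method. Each respondent
# secretly flips a coin: heads -> answer truthfully; tails -> flip again
# and answer "yes" on heads. Then P(yes) = 0.5 * true_rate + 0.25,
# so the aggregate estimate is true_rate = 2 * (P(yes) - 0.25).

def respond(truth: bool, rng: random.Random) -> bool:
    if rng.random() < 0.5:
        return truth             # first coin heads: honest answer
    return rng.random() < 0.5    # tails: random answer, "yes" half the time

def estimate_rate(answers: list[bool]) -> float:
    p_yes = sum(answers) / len(answers)
    return 2 * (p_yes - 0.25)

rng = random.Random(42)
true_rate = 0.08  # e.g. the 8% fabrication/falsification rate
answers = [respond(rng.random() < true_rate, rng) for _ in range(100_000)]
print(round(estimate_rate(answers), 2))  # recovers a value near 0.08
```

The trade-off is statistical: the coin flips add noise, so a randomized-response survey needs a larger sample to pin down the prevalence than a direct question would, in exchange for plausible deniability for every respondent.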

When the survey began last year, organizers invited more than 60,000 researchers to take part—those working across all fields of research, both science and the humanities, at some 22 Dutch universities and research centers. However, many institutions refused to cooperate for fear of negative publicity, and responses fell short of expectations: Only about 6800 completed surveys were received. Still, that’s more responses than any previous research integrity survey, and the response rate at the participating universities was 21%—in line with previous surveys.

One of the preprints focuses on the prevalence of misbehavior—cases of fraud as well as a less severe category of “questionable research practices,” such as carelessly assessing the work of colleagues, poorly mentoring junior researchers, or selectively citing scientific literature. The other article focuses on responsible behavior; this includes correcting one’s own published errors, sharing research data, and “preregistering” experiments—posting hypotheses and protocols ahead of time to reduce the bias that can arise when these are released after data collection.