Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, December 6, 2019

The female problem: how male bias in medical trials ruined women's health

Gabrielle Jackson
The Guardian
Originally posted November 13, 2019

Here is an excerpt:

The result of this male bias in research extends beyond clinical practice. Of the 10 prescription drugs taken off the market by the US Food and Drug Administration between 1997 and 2000 due to severe adverse effects, eight caused greater health risks in women. A 2018 study found this was a result of “serious male biases in basic, preclinical, and clinical research”.

The campaign had an effect in the US: in 1993, the FDA and the NIH mandated the inclusion of women in clinical trials. Between the 70s and 90s, these organisations and many other national and international regulators had a policy that ruled out women of so-called childbearing potential from early-stage drug trials.

The reasoning went like this: since women are born with all the eggs they will ever produce, they should be excluded from drug trials in case the drug proves toxic and impedes their ability to reproduce in the future.

The result was that all women were excluded from trials, regardless of their age, gender status, sexual orientation or wish or ability to bear children. Men, on the other hand, continually produce new sperm, so they were thought to represent a reduced risk. It sounds like a sensible policy, except that it treats all women like walking wombs and has introduced a huge bias into the health of the human race.

In their 1994 book Outrageous Practices, Leslie Laurence and Beth Weinhouse wrote: “It defies logic for researchers to acknowledge gender difference by claiming women’s hormones can affect study results – for instance, by affecting drug metabolism – but then to ignore these differences, study only men and extrapolate the results to women.”

The info is here.

Ethical research — the long and bumpy road from shirked to shared

Sarah Franklin
Originally posted October 29, 2019

Here is an excerpt:

Beyond bewilderment

Just as the ramifications of the birth of modern biology were hard to delineate in the late nineteenth century, so there is a sense of ethical bewilderment today. The feeling of being overwhelmed is exacerbated by a lack of regulatory infrastructure or adequate policy precedents. Bioethics, once a beacon of principled pathways to policy, is increasingly lost, like Simba, in a sea of thundering wildebeest. Many of the ethical challenges arising from today’s turbocharged research culture involve rapidly evolving fields that are pursued by globally competitive projects and teams, spanning disparate national regulatory systems and cultural norms. The unknown unknowns grow by the day.

The bar for proper scrutiny has not so much been lowered as sawn to pieces: dispersed, area-specific ethical oversight now exists in a range of forms for every acronym from AI (artificial intelligence) to GM organisms. A single, Belmont-style umbrella no longer seems likely, or even feasible. Much basic science is privately funded and therefore secretive. And the mergers between machine learning and biological synthesis raise additional concerns. Instances of enduring and successful international regulation are rare. The stereotype of bureaucratic, box-ticking ethical compliance is no longer fit for purpose in a world of CRISPR twins, synthetic neurons and self-driving cars.

Bioethics evolves, as does any other branch of knowledge. The post-millennial trend has been to become more global, less canonical and more reflexive. The field no longer relies on philosophically derived mandates codified into textbook formulas. Instead, it functions as a dashboard of pragmatic instruments, and is less expert-driven, more interdisciplinary, less multipurpose and more bespoke. In the wake of the ‘turn to dialogue’ in science, bioethics often looks more like public engagement — and vice versa. Policymakers, polling companies and government quangos tasked with organizing ethical consultations on questions such as mitochondrial donation (‘three-parent embryos’, as the media would have it) now perform the evaluations formerly assigned to bioethicists.

The info is here.

Thursday, December 5, 2019

Galileo’s Big Mistake

Philip Goff
Scientific American Blog
Originally posted November 7, 2019

Here is an excerpt:

Galileo, as it were, stripped the physical world of its qualities; and after he’d done that, all that remained were the purely quantitative properties of matter—size, shape, location, motion—properties that can be captured in mathematical geometry. In Galileo’s worldview, there is a radical division between the following two things:
  • The physical world with its purely quantitative properties, which is the domain of science,
  • Consciousness, with its qualities, which is outside of the domain of science.
It was this fundamental division that allowed for the possibility of mathematical physics: once the qualities had been removed, all that remained of the physical world could be captured in mathematics. And hence, natural science, for Galileo, was never intended to give us a complete description of reality. The whole project was premised on setting qualitative consciousness outside of the domain of science.

What do these 17th century discussions have to do with the contemporary science of consciousness? It is now broadly agreed that consciousness poses a very serious challenge for contemporary science. Despite rapid progress in our understanding of the brain, we still have no explanation of how complex electrochemical signaling could give rise to a subjective inner world of colors, sounds, smells and tastes.

Although this problem is taken very seriously, many assume that the way to deal with this challenge is simply to continue with our standard methods for investigating the brain. The great success of physical science in explaining more and more of our universe ought to give us confidence, it is thought, that physical science will one day crack the puzzle of consciousness.

The blog post is here.

How Misinformation Spreads--and Why We Trust It

Cailin O'Connor and James Owen Weatherall
Scientific American
Originally posted September 2019

Here is an excerpt:

Many communication theorists and social scientists have tried to understand how false beliefs persist by modeling the spread of ideas as a contagion. Employing mathematical models involves simulating a simplified representation of human social interactions using a computer algorithm and then studying these simulations to learn something about the real world. In a contagion model, ideas are like viruses that go from mind to mind.

You start with a network, which consists of nodes, representing individuals, and edges, which represent social connections.  You seed an idea in one “mind” and see how it spreads under various assumptions about when transmission will occur.
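The network model described above can be sketched in a few lines of code. The following is a minimal, illustrative simulation, not the authors' actual model: it assumes an undirected network given as an edge list and a single fixed per-edge transmission probability (`p_transmit` is a made-up parameter name for this sketch).

```python
import random

def simulate_contagion(edges, seed, p_transmit=0.3, steps=10, rng=None):
    """Spread an idea over a network of (node, node) edges.

    Each step, every current "believer" may transmit the idea to each
    neighbor who does not yet hold it, with probability p_transmit.
    Returns the set of nodes holding the idea when spreading stops.
    """
    rng = rng or random.Random(0)

    # Build an adjacency list from the undirected edge list.
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)

    believers = {seed}  # seed the idea in one "mind"
    for _ in range(steps):
        newly = set()
        for node in believers:
            for nb in neighbors.get(node, ()):
                if nb not in believers and rng.random() < p_transmit:
                    newly.add(nb)
        if not newly:  # no new transmissions; the spread has stalled
            break
        believers |= newly
    return believers

# Seed the idea at node 0 and watch it spread through a small network.
network = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 4)]
spread = simulate_contagion(network, seed=0, p_transmit=1.0, steps=5)
print(sorted(spread))  # → [0, 1, 2, 3, 4]
```

Varying `p_transmit` or the network's shape changes how far and how fast the idea travels, which is exactly the kind of question these simplified simulations are used to explore.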

Contagion models are extremely simple but have been used to explain surprising patterns of behavior, such as the epidemic of suicide that reportedly swept through Europe after publication of Goethe's The Sorrows of Young Werther in 1774 or when dozens of U.S. textile workers in 1962 reported suffering from nausea and numbness after being bitten by an imaginary insect. They can also explain how some false beliefs propagate on the Internet.

Before the last U.S. presidential election, an image of a young Donald Trump appeared on Facebook. It included a quote, attributed to a 1998 interview in People magazine, saying that if Trump ever ran for president, it would be as a Republican because the party is made up of “the dumbest group of voters.” Although it is unclear who “patient zero” was, we know that this meme passed rapidly from profile to profile.

The meme's veracity was quickly evaluated and debunked. The fact-checking Web site Snopes reported that the quote was fabricated as early as October 2015. But as with the tomato hornworm, these efforts to disseminate truth did not change how the rumors spread. One copy of the meme alone was shared more than half a million times. As new individuals shared it over the next several years, their false beliefs infected friends who observed the meme, and they, in turn, passed the false belief on to new areas of the network.

This is why many widely shared memes seem to be immune to fact-checking and debunking. Each person who shared the Trump meme simply trusted the friend who had shared it rather than checking for themselves.

Putting the facts out there does not help if no one bothers to look them up. It might seem like the problem here is laziness or gullibility—and thus that the solution is merely more education or better critical thinking skills. But that is not entirely right.

Sometimes false beliefs persist and spread even in communities where everyone works very hard to learn the truth by gathering and sharing evidence. In these cases, the problem is not unthinking trust. It goes far deeper than that.

The info is here.

Wednesday, December 4, 2019

Veterans Must Also Heal From Moral Injury After War

Camillo Mac Bica
Originally published Nov 11, 2019

Here are two excerpts:

Humankind has identified and internalized a set of values and norms through which we define ourselves as persons, structure our world and render our relationship to it — and to other human beings — comprehensible. These values and norms provide the parameters of our being: our moral identity. Consequently, we now have the need and the means to weigh concrete situations to determine acceptable (right) and unacceptable (wrong) behavior.

Whether an individual chooses to act rightly or wrongly, according to or in violation of her moral identity, will affect whether she perceives herself as true to her personal convictions and to others in the moral community who share her values and ideals. As the moral gravity of one’s actions and experiences on the battlefield becomes apparent, a warrior may suffer profound moral confusion and distress at having transgressed her moral foundations, her moral identity.

Guilt is, simply speaking, the awareness of having transgressed one’s moral convictions and the anxiety precipitated by a perceived breakdown of one’s ethical cohesion — one’s integrity — and an alienation from the moral community. Shame is the loss of self-esteem consequent to a failure to live up to personal and communal expectations.


Having completed the necessary philosophical and psychological groundwork, veterans can now begin the very difficult task of confronting the experience. That is, of remembering, reassessing and morally reevaluating their responsibility and culpability for their perceived transgressions on the battlefield.

Reassessing their behavior in combat within the parameters of their increased philosophical and psychological awareness, veterans realize that the programming to which they were subjected and the experience of war as a survival situation are causally connected to those specific battlefield incidents and behaviors, theirs and/or others’, that weigh heavily on their consciences — their moral injury. As a consequence, they understand these influences as extenuating circumstances.

Finally, as they morally reevaluate their actions in war, they see these incidents and behaviors in combat not as justifiable, but as understandable, perhaps even excusable, and their culpability mitigated by the fact that those who determined policy, sent them to war, issued the orders, and allowed the war to occur and/or to continue unchallenged must share responsibility for the crimes and horror that inevitably characterize war.

The info is here.

AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense

Department of Defense
Defense Innovation Board
Published November 2019

Here is an excerpt:

What DoD is Doing to Establish an Ethical AI Culture

DoD’s “enduring mission is to provide combat-credible military forces needed to deter war and protect the security of our nation.” As such, DoD seeks to responsibly integrate and leverage AI across all domains and mission areas, as well as business administration, cybersecurity, decision support, personnel, maintenance and supply, logistics, healthcare, and humanitarian programs. Notably, many AI use cases are non-lethal in nature. From making battery fuel cells more efficient to predicting kidney disease in our veterans to managing fraud in supply chain management, AI has myriad applications throughout the Department.

DoD is mission-oriented, and to complete its mission, it requires access to cutting edge technologies to support its warfighters at home and abroad. These technologies, however, are only one component to fulfilling its mission. To ensure the safety of its personnel, to comply with the Law of War, and to maintain an exquisite professional force, DoD maintains and abides by myriad processes, procedures, rules, and laws to guide its work. These are buttressed by DoD’s strong commitment to the following values: leadership, professionalism, and technical knowledge through the dedication to duty, integrity, ethics, honor, courage, and loyalty. As DoD utilizes AI in its mission, these values ground, inform, and sustain the AI Ethics Principles.

As DoD continues to comply with existing policies, processes, and procedures, as well as to create new opportunities for responsible research and innovation in AI, there are several cases where DoD is beginning to or already engaging in activities that comport with the calls from the DoD AI Strategy and the AI Ethics Principles enumerated here.

The document is here.

Tuesday, December 3, 2019

AI Ethics is All About Power

Khari Johnson
Originally published Nov 11, 2019

Here is an excerpt:

“The gap between those who develop and profit from AI and those most likely to suffer the consequences of its negative effects is growing larger, not smaller,” the report reads, citing a lack of government regulation in an AI industry where power is concentrated among a few companies.

Dr. Safiya Noble and Sarah Roberts chronicled the impact of the tech industry’s lack of diversity in a paper UCLA published in August. The coauthors argue that we’re now witnessing the “rise of a digital technocracy” that masks itself as post-racial and merit-driven but is actually a power system that hoards resources and is likely to judge a person’s value based on racial identity, gender, or class.

“American corporations were not able to ‘self-regulate’ and ‘innovate’ an end to racial discrimination — even under federal law. Among modern digital technology elites, myths of meritocracy and intellectual superiority are used as racial and gender signifiers that disproportionately consolidate resources away from people of color, particularly African-Americans, Latinx, and Native Americans,” reads the report. “Investments in meritocratic myths suppress interrogations of racism and discrimination even as the products of digital elites are infused with racial, class, and gender markers.”

Despite talk about how to solve tech’s diversity problem, much of the tech industry has only made incremental progress, and funding for startups with Latinx or black founders still lags behind those for white founders. To address the tech industry’s general lack of progress on diversity and inclusion initiatives, a pair of Data & Society fellows suggested that tech and AI companies embrace racial literacy.

The info is here.

Editor's Note: The article covers a huge swath of information.

A Constructionist Review of Morality and Emotions: No Evidence for Specific Links Between Moral Content and Discrete Emotions

Cameron, C. D., Lindquist, K. A., & Gray, K.
Pers Soc Psychol Rev. 
2015 Nov;19(4):371-94.
doi: 10.1177/1088868314566683.


Morality and emotions are linked, but what is the nature of their correspondence? Many "whole number" accounts posit specific correspondences between moral content and discrete emotions, such that harm is linked to anger, and purity is linked to disgust. A review of the literature provides little support for these specific morality-emotion links. Moreover, any apparent specificity may arise from global features shared between morality and emotion, such as affect and conceptual content. These findings are consistent with a constructionist perspective of the mind, which argues against a whole number of discrete and domain-specific mental mechanisms underlying morality and emotion. Instead, constructionism emphasizes the flexible combination of basic and domain-general ingredients such as core affect and conceptualization in creating the experience of moral judgments and discrete emotions. The implications of constructionism in moral psychology are discussed, and we propose an experimental framework for rigorously testing morality-emotion links.

Monday, December 2, 2019

A.I. Systems Echo Biases They’re Fed, Putting Scientists on Guard

Cade Metz
The New York Times
Originally published Nov 11, 2019

Here is the conclusion:

“This is hard. You need a lot of time and care,” he said. “We found an obvious bias. But how many others are in there?”

Dr. Bohannon said computer scientists must develop the skills of a biologist. Much as a biologist strives to understand how a cell works, software engineers must find ways of understanding systems like BERT.

In unveiling the new version of its search engine last month, Google executives acknowledged this phenomenon. And they said they tested their systems extensively with an eye toward removing any bias.

Researchers are only beginning to understand the effects of bias in systems like BERT. But as Dr. Munro showed, companies are already slow to notice even obvious bias in their systems. After Dr. Munro pointed out the problem, Amazon corrected it. Google said it was working to fix the issue.

Primer’s chief executive, Sean Gourley, said vetting the behavior of this new technology would become so important, it will spawn a whole new industry, where companies pay specialists to audit their algorithms for all kinds of bias and other unexpected behavior.

“This is probably a billion-dollar industry,” he said.

The whole article is here.