Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, December 9, 2019

The rise of the greedy-brained ape: Book Review

Tim Radford
Nature.com
Originally published 30 Oct 19

Here is an excerpt:

For her hugely enjoyable sprint through human evolutionary history, Vince (erstwhile news editor of this journal) intertwines many threads: language and writing; the command of tools, pursuit of beauty and appetite for trinkets; and the urge to build things, awareness of time and pursuit of reason. She tracks the cultural explosion, triggered by technological discovery, that gathered pace with the first trade in obsidian blades in East Africa at least 320,000 years ago. That has climaxed this century with the capacity to exploit 40% of the planet’s total primary production.

How did we do it? Vince examines, for instance, our access to and use of energy. Other primates must chew for five hours a day to survive. Humans do so for no more than an hour. We are active 16 hours a day, a tranche during which other mammals sleep. We learn by blind variation and selective retention. Vince proposes that our ancestors enhanced that process of learning from each other with the command of fire: it is 10 times more efficient to eat cooked meat than raw, and heat releases 50% of all the carbohydrates in cereals and tubers.

Thus Homo sapiens secured survival and achieved dominance by exploiting extra energy. The roughly 2,000 calories ideally consumed by one human each day generate about 90 watts: enough energy for one incandescent light bulb. At the flick of a switch or turn of a key, the average human now has access to roughly 2,300 watts of energy from the hardware that powers our lives — and the richest have much more.
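That wattage figure is a straightforward unit conversion, which can be checked directly. A minimal sketch (the 4,184 joules-per-kilocalorie factor and 86,400 seconds per day are standard constants, not from the review; the result lands close to the quoted 90 W):

```python
# Convert a daily food-energy intake into an average power output.
# Dietary "calories" are kilocalories; 1 kcal = 4,184 J.
KCAL_TO_JOULES = 4184
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 s

def average_power_watts(kcal_per_day):
    """Average power in watts sustained by a given daily energy intake."""
    return kcal_per_day * KCAL_TO_JOULES / SECONDS_PER_DAY

human = average_power_watts(2000)  # roughly one incandescent bulb
grid_access = 2300                 # watts of external energy cited in the review
print(round(human), round(grid_access / human, 1))
```

By this arithmetic the average person commands roughly 24 times their own metabolic output, which is the comparison the passage is driving at.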

The book review is here.

Escaping Skinner's Box: AI and the New Era of Techno-Superstition

John Danaher
Philosophical Disquisitions
Originally posted October 10, 2019

Here is an excerpt:

The findings were dispiriting. Green and Chen found that using algorithms did improve the overall accuracy of decision-making across all conditions, but this was not because adding information and explanations enabled the humans to play a more meaningful role in the process. On the contrary, adding more information often made the human interaction with the algorithm worse. When given the opportunity to learn from the real-world outcomes, the humans became overconfident in their own judgments, more biased, and less accurate overall. When given explanations, they could maintain accuracy but only to the extent that they deferred more to the algorithm. In short, the more transparent the system seemed to the workers, the more the workers either made its decisions worse or limited their own agency.

It is important not to extrapolate too much from one study, but the findings here are consistent with what has been found in other cases of automation in the workplace: humans are often the weak link in the chain. They need to be kept in check. This suggests that if we want to reap the benefits of AI and automation, we may have to create an environment that is much like that of the Skinner box, one in which humans can flap their wings, convinced they are making a difference, but prevented from doing any real damage. This is the enchanted world of techno-superstition: a world in which we adopt odd rituals and habits (explainable AI; fair AI etc) to create an illusion of control.

(cut)

These two problems combine into a third problem: the erosion of the possibility of achievement. One reason why we work is so that we can achieve certain outcomes. But when we lack understanding and control it undermines our sense of achievement. We achieve things when we use our reason to overcome obstacles to problem-solving in the real world. Some people might argue that a human collaborating with an AI system to produce some change in the world is achieving something through the combination of their efforts. But this is only true if the human plays some significant role in the collaboration. If humans cannot meaningfully make a difference to the success of AI or accurately calibrate their behaviour to produce better outcomes in tandem with the AI, then the pathway to achievement is blocked. This seems to be what happens, even when we try to make the systems more transparent.

The info is here.

Sunday, December 8, 2019

What Einstein meant by ‘God does not play dice’

Jim Baggott
aeon.com
Originally published November 21, 2019

Here is an excerpt:

But Einstein’s was a God of philosophy, not religion. When asked many years later whether he believed in God, he replied: ‘I believe in Spinoza’s God, who reveals himself in the lawful harmony of all that exists, but not in a God who concerns himself with the fate and the doings of mankind.’ Baruch Spinoza, a contemporary of Isaac Newton and Gottfried Leibniz, had conceived of God as identical with nature. For this, he was considered a dangerous heretic, and was excommunicated from the Jewish community in Amsterdam.

Einstein’s God is infinitely superior but impersonal and intangible, subtle but not malicious. He is also firmly determinist. As far as Einstein was concerned, God’s ‘lawful harmony’ is established throughout the cosmos by strict adherence to the physical principles of cause and effect. Thus, there is no room in Einstein’s philosophy for free will: ‘Everything is determined, the beginning as well as the end, by forces over which we have no control … we all dance to a mysterious tune, intoned in the distance by an invisible player.’

The special and general theories of relativity provided a radical new way of conceiving of space and time and their active interactions with matter and energy. These theories are entirely consistent with the ‘lawful harmony’ established by Einstein’s God. But the new theory of quantum mechanics, which Einstein had also helped to found in 1905, was telling a different story. Quantum mechanics is about interactions involving matter and radiation, at the scale of atoms and molecules, set against a passive background of space and time.

The info is here.

Saturday, December 7, 2019

Why do so many Americans hate the welfare state?

Joe Humphries
irishtimes.com
Originally posted October 24, 2019

Interview with Elizabeth Anderson

Here is an excerpt:

Many ethical problems today are presented as matters of individual rather than collective responsibility. Instead of looking at structural injustices, for example, people are told to recycle more to save the environment, or to manage their workload better to avoid exploitation. Where does this bias come from?

“One way to think about it is this is another bizarre legacy of Calvinist thought. It’s really deep in Protestantism that each individual is responsible for their own salvation.

“It’s really an anti-Catholic thing, right? The Catholics have this giant institution that’s going to help people; and Protestantism says, no, no, no, it’s totally you and your conscience, or your faith.

“That individualism – the idea that I’ve got to save myself – got secularised over time. And it is deep, much deeper in America than in Europe – not only because there are way more Catholics in Europe who never bought into this ideology, but also because in Europe, through the experience of the two World Wars, they realised they were all in the boat together and had better work together or else all was lost.

“America was never under existential threat. So you didn’t have that same sense that, for our very survival, we must come together as a nation. I think those experiences are really profound and helped to propel the welfare state across Europe post World War II.”

You’re well known for promoting the idea of relational equality. Tell us a bit about it.

“For a few decades now I’ve been advancing the idea that the fundamental aim of egalitarianism is to establish relations of equality: What are the social relations with the people around us? And that aims to take our focus away from just how much money is in my pocket.

“People do not exist for the sake of money. Wealth exists to enhance your life and not the other way around. We should be focusing on what are we doing to each other in our obsession with maximising profits. How are workers being treated? How are consumers being treated? How is the environment being treated?”

The info is here.

Friday, December 6, 2019

The female problem: how male bias in medical trials ruined women's health

Gabrielle Jackson
The Guardian
Originally posted 13 Nov 19

Here is an excerpt:

The result of this male bias in research extends beyond clinical practice. Of the 10 prescription drugs taken off the market by the US Food and Drug Administration between 1997 and 2000 due to severe adverse effects, eight caused greater health risks in women. A 2018 study found this was a result of “serious male biases in basic, preclinical, and clinical research”.

The campaign had an effect in the US: in 1993, the FDA and the NIH mandated the inclusion of women in clinical trials. Between the 70s and 90s, these organisations and many other national and international regulators had a policy that ruled out women of so-called childbearing potential from early-stage drug trials.

The reasoning went like this: since women are born with all the eggs they will ever produce, they should be excluded from drug trials in case the drug proves toxic and impedes their ability to reproduce in the future.

The result was that all women were excluded from trials, regardless of their age, gender status, sexual orientation or wish or ability to bear children. Men, on the other hand, constantly reproduce their sperm, meaning they represent a reduced risk. It sounds like a sensible policy, except it treats all women like walking wombs and has introduced a huge bias into the health of the human race.

In their 1994 book Outrageous Practices, Leslie Laurence and Beth Weinhouse wrote: “It defies logic for researchers to acknowledge gender difference by claiming women’s hormones can affect study results – for instance, by affecting drug metabolism – but then to ignore these differences, study only men and extrapolate the results to women.”

The info is here.

Ethical research — the long and bumpy road from shirked to shared

Sarah Franklin
Nature.com
Originally posted October 29, 2019

Here is an excerpt:

Beyond bewilderment

Just as the ramifications of the birth of modern biology were hard to delineate in the late nineteenth century, so there is a sense of ethical bewilderment today. The feeling of being overwhelmed is exacerbated by a lack of regulatory infrastructure or adequate policy precedents. Bioethics, once a beacon of principled pathways to policy, is increasingly lost, like Simba, in a sea of thundering wildebeest. Many of the ethical challenges arising from today’s turbocharged research culture involve rapidly evolving fields that are pursued by globally competitive projects and teams, spanning disparate national regulatory systems and cultural norms. The unknown unknowns grow by the day.

The bar for proper scrutiny has not so much been lowered as sawn to pieces: dispersed, area-specific ethical oversight now exists in a range of forms for every acronym from AI (artificial intelligence) to GM organisms. A single, Belmont-style umbrella no longer seems likely, or even feasible. Much basic science is privately funded and therefore secretive. And the mergers between machine learning and biological synthesis raise additional concerns. Instances of enduring and successful international regulation are rare. The stereotype of bureaucratic, box-ticking ethical compliance is no longer fit for purpose in a world of CRISPR twins, synthetic neurons and self-driving cars.

Bioethics evolves, as does any other branch of knowledge. The post-millennial trend has been to become more global, less canonical and more reflexive. The field no longer relies on philosophically derived mandates codified into textbook formulas. Instead, it functions as a dashboard of pragmatic instruments, and is less expert-driven, more interdisciplinary, less multipurpose and more bespoke. In the wake of the ‘turn to dialogue’ in science, bioethics often looks more like public engagement — and vice versa. Policymakers, polling companies and government quangos tasked with organizing ethical consultations on questions such as mitochondrial donation (‘three-parent embryos’, as the media would have it) now perform the evaluations formerly assigned to bioethicists.

The info is here.

Thursday, December 5, 2019

Galileo’s Big Mistake

Philip Goff
Scientific American Blog
Originally posted November 7, 2019

Here is an excerpt:

Galileo, as it were, stripped the physical world of its qualities; and after he’d done that, all that remained were the purely quantitative properties of matter—size, shape, location, motion—properties that can be captured in mathematical geometry. In Galileo’s worldview, there is a radical division between the following two things:
  • The physical world with its purely quantitative properties, which is the domain of science,
  • Consciousness, with its qualities, which is outside of the domain of science.
It was this fundamental division that allowed for the possibility of mathematical physics: once the qualities had been removed, all that remained of the physical world could be captured in mathematics. And hence, natural science, for Galileo, was never intended to give us a complete description of reality. The whole project was premised on setting qualitative consciousness outside of the domain of science.

What do these 17th century discussions have to do with the contemporary science of consciousness? It is now broadly agreed that consciousness poses a very serious challenge for contemporary science. Despite rapid progress in our understanding of the brain, we still have no explanation of how complex electrochemical signaling could give rise to a subjective inner world of colors, sounds, smells and tastes.

Although this problem is taken very seriously, many assume that the way to deal with this challenge is simply to continue with our standard methods for investigating the brain. The great success of physical science in explaining more and more of our universe ought to give us confidence, it is thought, that physical science will one day crack the puzzle of consciousness.

The blog post is here.

How Misinformation Spreads--and Why We Trust It

Cailin O'Connor and James Owen Weatherall
Scientific American
Originally posted September 2019

Here is an excerpt:

Many communication theorists and social scientists have tried to understand how false beliefs persist by modeling the spread of ideas as a contagion. Employing mathematical models involves simulating a simplified representation of human social interactions using a computer algorithm and then studying these simulations to learn something about the real world. In a contagion model, ideas are like viruses that go from mind to mind.

You start with a network, which consists of nodes, representing individuals, and edges, which represent social connections. You seed an idea in one “mind” and see how it spreads under various assumptions about when transmission will occur.
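The paragraph above names all the ingredients of a toy contagion model: nodes, edges, a seeded idea, and a chance of transmission at each step. A minimal sketch in Python (the particular network, transmission probability, and step count are illustrative choices, not taken from the article):

```python
import random

def simulate_contagion(edges, seed_node, p_transmit, steps, rng):
    """Simulate a simple susceptible-infected contagion on a network.

    edges: list of (u, v) social connections; seed_node: the first "mind"
    holding the idea; p_transmit: chance the idea crosses an edge per step.
    Returns the set of nodes holding the idea after the given steps.
    """
    # Build an adjacency list from the edge list.
    neighbours = {}
    for u, v in edges:
        neighbours.setdefault(u, set()).add(v)
        neighbours.setdefault(v, set()).add(u)

    infected = {seed_node}
    for _ in range(steps):
        newly_infected = set()
        for node in infected:
            for nb in neighbours.get(node, ()):
                if nb not in infected and rng.random() < p_transmit:
                    newly_infected.add(nb)
        infected |= newly_infected
    return infected

# A small ring of ten individuals with one shortcut, seeded at node 0.
edges = [(i, (i + 1) % 10) for i in range(10)] + [(0, 5)]
rng = random.Random(42)  # fixed seed so the run is reproducible
spread = simulate_contagion(edges, seed_node=0, p_transmit=0.5, steps=8, rng=rng)
print(len(spread))
```

Varying the network shape and transmission probability in a model like this is how theorists probe which structures let an idea, true or false, saturate a community.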

Contagion models are extremely simple but have been used to explain surprising patterns of behavior, such as the epidemic of suicide that reportedly swept through Europe after publication of Goethe's The Sorrows of Young Werther in 1774 or when dozens of U.S. textile workers in 1962 reported suffering from nausea and numbness after being bitten by an imaginary insect. They can also explain how some false beliefs propagate on the Internet.

Before the last U.S. presidential election, an image of a young Donald Trump appeared on Facebook. It included a quote, attributed to a 1998 interview in People magazine, saying that if Trump ever ran for president, it would be as a Republican because the party is made up of “the dumbest group of voters.” Although it is unclear who “patient zero” was, we know that this meme passed rapidly from profile to profile.

The meme's veracity was quickly evaluated and debunked. The fact-checking Web site Snopes reported that the quote was fabricated as early as October 2015. But as with the tomato hornworm, these efforts to disseminate truth did not change how the rumors spread. One copy of the meme alone was shared more than half a million times. As new individuals shared it over the next several years, their false beliefs infected friends who observed the meme, and they, in turn, passed the false belief on to new areas of the network.

This is why many widely shared memes seem to be immune to fact-checking and debunking. Each person who shared the Trump meme simply trusted the friend who had shared it rather than checking for themselves.

Putting the facts out there does not help if no one bothers to look them up. It might seem like the problem here is laziness or gullibility—and thus that the solution is merely more education or better critical thinking skills. But that is not entirely right.

Sometimes false beliefs persist and spread even in communities where everyone works very hard to learn the truth by gathering and sharing evidence. In these cases, the problem is not unthinking trust. It goes far deeper than that.

The info is here.

Wednesday, December 4, 2019

Veterans Must Also Heal From Moral Injury After War

Camillo Mac Bica
truthout.org
Originally published Nov 11, 2019

Here are two excerpts:

Humankind has identified and internalized a set of values and norms through which we define ourselves as persons, structure our world and render our relationship to it — and to other human beings — comprehensible. These values and norms provide the parameters of our being: our moral identity. Consequently, we now have the need and the means to weigh concrete situations to determine acceptable (right) and unacceptable (wrong) behavior.

Whether an individual chooses to act rightly or wrongly, according to or in violation of her moral identity, will affect whether she perceives herself as true to her personal convictions and to others in the moral community who share her values and ideals. As the moral gravity of one’s actions and experiences on the battlefield becomes apparent, a warrior may suffer profound moral confusion and distress at having transgressed her moral foundations, her moral identity.

Guilt is, simply speaking, the awareness of having transgressed one’s moral convictions and the anxiety precipitated by a perceived breakdown of one’s ethical cohesion — one’s integrity — and an alienation from the moral community. Shame is the loss of self-esteem consequent to a failure to live up to personal and communal expectations.

(cut)

Having completed the necessary philosophical and psychological groundwork, veterans can now begin the very difficult task of confronting the experience. That is, of remembering, reassessing and morally reevaluating their responsibility and culpability for their perceived transgressions on the battlefield.

Reassessing their behavior in combat within the parameters of their increased philosophical and psychological awareness, veterans realize that the programming to which they were subjected and the experience of war as a survival situation are causally connected to those specific battlefield incidents and behaviors, theirs and/or others’, that weigh heavily on their consciences — their moral injury. As a consequence, they understand these influences as extenuating circumstances.

Finally, as they morally reevaluate their actions in war, they see these incidents and behaviors in combat not as justifiable, but as understandable, perhaps even excusable, and their culpability mitigated by the fact that those who determined policy, sent them to war, issued the orders, and allowed the war to occur and/or to continue unchallenged must share responsibility for the crimes and horror that inevitably characterize war.

The info is here.