Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Saturday, December 14, 2019

The Dark Psychology of Social Networks

Jonathan Haidt and Tobias Rose-Stockwell
The Atlantic
Originally posted December 2019

Here are two excerpts:

Human beings evolved to gossip, preen, manipulate, and ostracize. We are easily lured into this new gladiatorial circus, even when we know that it can make us cruel and shallow. As the Yale psychologist Molly Crockett has argued, the normal forces that might stop us from joining an outrage mob—such as time to reflect and cool off, or feelings of empathy for a person being humiliated—are attenuated when we can’t see the person’s face, and when we are asked, many times a day, to take a side by publicly “liking” the condemnation.

In other words, social media turns many of our most politically engaged citizens into Madison’s nightmare: arsonists who compete to create the most inflammatory posts and images, which they can distribute across the country in an instant while their public sociometer displays how far their creations have traveled.

(cut)

Twitter also made a key change in 2009, adding the “Retweet” button. Until then, users had to copy and paste older tweets into their status updates, a small obstacle that required a few seconds of thought and attention. The Retweet button essentially enabled the frictionless spread of content. A single click could pass someone else’s tweet on to all of your followers—and let you share in the credit for contagious content. In 2012, Facebook offered its own version of the retweet, the “Share” button, to its fastest-growing audience: smartphone users.

Chris Wetherell was one of the engineers who created the Retweet button for Twitter. He admitted to BuzzFeed earlier this year that he now regrets it. As Wetherell watched the first Twitter mobs use his new tool, he thought to himself: “We might have just handed a 4-year-old a loaded weapon.”

The coup de grĂ¢ce came in 2012 and 2013, when Upworthy and other sites began to capitalize on this new feature set, pioneering the art of testing headlines across dozens of variations to find the version that generated the highest click-through rate. This was the beginning of “You won’t believe …” articles and their ilk, paired with images tested and selected to make us click impulsively. These articles were not usually intended to cause outrage (the founders of Upworthy were more interested in uplift). But the strategy’s success ensured the spread of headline testing, and with it emotional story-packaging, through new and old media alike; outrageous, morally freighted headlines proliferated in the following years.

The info is here.

Friday, December 13, 2019

The Ethical Dilemma at the Heart of Big Tech Companies

Emanuel Moss and Jacob Metcalf
Harvard Business Review
Originally posted 14 Nov 19

Here is an excerpt:

The central challenge ethics owners are grappling with is negotiating between external pressures to respond to ethical crises at the same time that they must be responsive to the internal logics of their companies and the industry. On the one hand, external criticisms push them toward challenging core business practices and priorities. On the other hand, the logics of Silicon Valley, and of business more generally, create pressures to establish or restore predictable processes and outcomes that still serve the bottom line.

We identified three distinct logics that characterize this tension between internal and external pressures:

Meritocracy: Although originally coined as a derisive term in satirical science fiction by British sociologist Michael Young, meritocracy infuses everything in Silicon Valley from hiring practices to policy positions, and retroactively justifies the industry’s power in our lives. As such, ethics is often framed with an eye toward smarter, better, and faster approaches, as if the problems of the tech industry can be addressed through those virtues. Given this, it is not surprising that many within the tech industry position themselves as the actors best suited to address ethical challenges, rather than less technically-inclined stakeholders, including elected officials and advocacy groups. In our interviews, this manifested in relying on engineers to use their personal judgement by “grappling with the hard questions on the ground,” trusting them to discern and to evaluate the ethical stakes of their own products. While there are some rigorous procedures that help designers scan for the consequences of their products, sitting in a room and “thinking hard” about the potential harms of a product in the real world is not the same as thoroughly understanding how someone (whose life is very different than a software engineer) might be affected by things like predictive policing or facial recognition technology, as obvious examples. Ethics owners find themselves being pulled between technical staff that assert generalized competence over many domains and their own knowledge that ethics is a specialized domain that requires deep contextual understanding.

The info is here.

Conference warned of dangers of facial recognition technology

Because of new technologies, “we are all monitored and recorded every minute of every day of our lives”, a conference has heard.
Colm Keena
The Irish Times
Originally posted 13 Nov 19

Here is an excerpt:

The potential of facial recognition technology to be used by oppressive governments and manipulative corporations was such that some observers have called for it to be banned. The suggestion should be taken seriously, Dr Danaher said.

The technology is “like a fingerprint of your face”, is cheap, and “normalises blanket surveillance”. This makes it “perfect” for oppressive governments and for manipulative corporations.

While the EU’s GDPR laws on the use of data applied here, Dr Danaher said Ireland should also introduce domestic law “to save us from the depredations of facial recognition technology”.

As well as facial recognition technology, he also addressed the conference about “deepfake” technology, which allows for the creation of highly convincing fake video content, and algorithms that assess risk, as other technologies that are creating challenges for the law.

In the US, the use of algorithms to predict a person’s likelihood of re-offending has raised significant concerns.

The info is here.

Thursday, December 12, 2019

State Supreme Court upholds decision in Beckley psychiatrist case

Jessica Farrish
The Register-Herald
Originally posted 8 Nov 19

The West Virginia Supreme Court of Appeals on Friday upheld a decision by the West Virginia Board of Medicine that imposed disciplinary actions on Dr. Omar Hasan, a Beckley psychiatrist.

The original case was decided in Kanawha County Circuit Court in July 2018 after Hasan appealed a decision by the West Virginia Board of Medicine to discipline him for an improper relationship with a patient. Hasan alleged the board had erred by failing to adopt the findings of fact recommended by its own hearing examiner, by improperly considering the content of text messages, and by misstating various facts in its final order.

Court documents state that Hasan began providing psychiatric medication in 2011 to a female patient. In September 2014, the patient reported to WVBOM that she and Hasan had had an improper relationship that included texts, phone calls, gifts and “sexual encounters on numerous occasions at various locations.”

She said that when Hasan ended the relationship, she tried to kill herself.

WVBOM investigated the patient’s claim and found probable cause to issue disciplinary actions against Hasan for entering a relationship with a patient for sexual satisfaction and for failing to cut off the patient-provider relationship once the texts had become sexual in nature, according to court filings.

Both are in violation of state law.

The info is here.

Donald Hoffman: The Case Against Reality

The Institute of Arts and Ideas
Originally published September 8, 2019


Many scientists believe that natural selection brought our perception of reality into clearer and deeper focus, reasoning that growing more attuned to the outside world gave our ancestors an evolutionary edge. Donald Hoffman, a cognitive scientist at the University of California, Irvine, thinks that just the opposite is true. Because evolution selects for survival, not accuracy, he proposes that our conscious experience masks reality behind millennia of adaptations for ‘fitness payoffs’ – an argument supported by his work running evolutionary game-theory simulations. In this interview recorded at the HowTheLightGetsIn Festival from the Institute of Arts and Ideas in 2019, Hoffman explains why he believes that perception must necessarily hide reality for conscious agents to survive and reproduce. With that view serving as a springboard, the wide-ranging discussion also touches on Hoffman’s consciousness-centric framework for reality, and its potential implications for our everyday lives.

Editor's Note: For mental health professionals, this video may be helpful in understanding perception, the self, and consciousness.

Wednesday, December 11, 2019

Veil-of-ignorance reasoning favors the greater good

Karen Huang, Joshua D. Greene and Max Bazerman
PNAS first published November 12, 2019
https://doi.org/10.1073/pnas.1910125116

Abstract

The “veil of ignorance” is a moral reasoning device designed to promote impartial decision-making by denying decision-makers access to potentially biasing information about who will benefit most or least from the available options. Veil-of-ignorance reasoning was originally applied by philosophers and economists to foundational questions concerning the overall organization of society. Here we apply veil-of-ignorance reasoning in a more focused way to specific moral dilemmas, all of which involve a tension between the greater good and competing moral concerns. Across six experiments (N = 5,785), three pre-registered, we find that veil-of-ignorance reasoning favors the greater good. Participants first engaged in veil-of-ignorance reasoning about a specific dilemma, asking themselves what they would want if they did not know who among those affected they would be. Participants then responded to a more conventional version of the same dilemma with a moral judgment, a policy preference, or an economic choice. Participants who first engaged in veil-of-ignorance reasoning subsequently made more utilitarian choices in response to a classic philosophical dilemma, a medical dilemma, a real donation decision between a more vs. less effective charity, and a policy decision concerning the social dilemma of autonomous vehicles. These effects depend on the impartial thinking induced by veil-of-ignorance reasoning and cannot be explained by a simple anchoring account, probabilistic reasoning, or generic perspective-taking. These studies indicate that veil-of-ignorance reasoning may be a useful tool for decision-makers who wish to make more impartial and/or socially beneficial choices.

Significance

The philosopher John Rawls aimed to identify fair governing principles by imagining people choosing their principles from behind a “veil of ignorance,” without knowing their places in the social order. Across 7 experiments with over 6,000 participants, we show that veil-of-ignorance reasoning leads to choices that favor the greater good. Veil-of-ignorance reasoning makes people more likely to donate to a more effective charity and to favor saving more lives in a bioethical dilemma. It also addresses the social dilemma of autonomous vehicles (AVs), aligning abstract approval of utilitarian AVs (which minimize total harm) with support for a utilitarian AV policy. These studies indicate that veil-of-ignorance reasoning may be used to promote decision making that is more impartial and socially beneficial.

When Assessing Novel Risks, Facts Are Not Enough

Baruch Fischhoff
Scientific American
September 2019

Here is an excerpt:

To start off, we wanted to figure out how well the general public understands the risks they face in everyday life. We asked groups of laypeople to estimate the annual death toll from causes such as drowning, emphysema and homicide and then compared their estimates with scientific ones. Based on previous research, we expected that people would make generally accurate predictions but that they would overestimate deaths from causes that get splashy or frequent headlines—murders, tornadoes—and underestimate deaths from “quiet killers,” such as stroke and asthma, that do not make big news as often.
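
A rough sketch of the kind of comparison described here, with entirely invented numbers standing in for both the lay estimates and the scientific statistics:

    # Illustrative only: compare lay estimates of annual deaths with reference
    # statistics and flag over- and under-estimated causes. All numbers are made up.
    lay_estimates = {"homicide": 25000, "tornado": 5000, "asthma": 1000, "stroke": 30000}
    reference = {"homicide": 19000, "tornado": 60, "asthma": 4000, "stroke": 150000}

    for cause, guess in lay_estimates.items():
        actual = reference[cause]
        ratio = guess / actual
        label = "overestimated" if ratio > 1 else "underestimated"
        print(f"{cause}: guessed {guess:,}, actual {actual:,} ({label}, x{ratio:.1f})")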

Overall, our predictions fared well. People overestimated highly reported causes of death and underestimated ones that received less attention. Images of terror attacks, for example, might explain why people who watch more television news worry more about terrorism than individuals who rarely watch. But one puzzling result emerged when we probed these beliefs. People who were strongly opposed to nuclear power believed that it had a very low annual death toll. Why, then, would they be against it? The apparent paradox made us wonder if by asking them to predict average annual death tolls, we had defined risk too narrowly. So, in a new set of questions we asked what risk really meant to people. When we did, we found that those opposed to nuclear power thought the technology had a greater potential to cause widespread catastrophes. That pattern held true for other technologies as well.

To find out whether knowing more about a technology changed this pattern, we asked technical experts the same questions. The experts generally agreed with laypeople about nuclear power's death toll for a typical year: low. But when they defined risk themselves, on a broader time frame, they saw less potential for problems. The general public, unlike the experts, emphasized what could happen in a very bad year. The public and the experts were talking past each other and focusing on different parts of reality.

The info is here.

Tuesday, December 10, 2019

Medicare for All: Would It Work? And Who Would Pay?

Ezekiel (Zeke) Emanuel
Podcast - Wharton
Originally posted 12 Nov 19

Here is an excerpt:

“If you want to control costs, there are at least three main areas you have to look at: drug costs, hospital costs to the private sector, and administrative costs,” he said. “All of them are out of whack. All of them are ballooned.”

On drug costs, for example, it is not clear if that would be achieved through negotiations with drug companies or by the government setting a price ceiling, Emanuel said. He suggested a way out: “We should have negotiations informed by value-based pricing,” he said. “How much health benefit does the drug give? The more the health benefit, the higher the price of the drug. But we do need to have caps.”

Emanuel also faulted Warren’s idea to cap hospital payments at 110% of Medicare rates, calling it unwise. He suggested 120% of Medicare rates instead, adding that the higher figure would “probably have no real pushback from most of the health policy people, especially if you do have a reduction in administrative costs and a reduction in drug costs.”

Emanuel pointed to a recent Rand Corporation study which showed that on average, private health plans pay more than 240% of Medicare rates for hospital services. “That seems way out of whack,” he said. “There are a lot of hospital monopolies, and consolidation has led to price increases – not quality increases as claimed. We do have to rein in hospital prices.” The big question is how that could be achieved, which may include placing a cap on those prices, he added.

On reining in administrative costs, Emanuel saw hope. He noted that the private sector spends an average of 12% on administrative costs, and he blamed that on insurance companies and employers wanting to design their own employee health plans. He suggested a set of five or 10 standardized plans from which employers could choose, adding that common health plans work well in countries like the Netherlands, Germany and Switzerland. Japan has 1,600 insurance companies, but standardized health plans and a centralized clearinghouse help keep administrative costs low, he added.

The info is here.

AI Deemed 'Too Dangerous To Release' Makes It Out Into The World

Andrew Griffin
independent.co.uk
Originally posted November 8, 2019

Here is an excerpt:

"Due to our concerns about malicious applications of the technology, we are not releasing the trained model," wrote OpenAI in a February blog post, released when it made the announcement. "As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper."

At that time, the organisation released only a very limited version of the tool, which used 124 million parameters. It has released more complex versions ever since, and has now made the full version available.
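
As a concrete illustration of what this staged release means in practice, here is a minimal sketch of sampling from a released GPT-2 checkpoint. It assumes the Hugging Face transformers package and its hosted "gpt2" weights (the 124-million-parameter model), neither of which is mentioned in the article; larger released checkpoints such as "gpt2-xl" can be swapped in the same way.

    # Minimal sketch: sample text from a publicly released GPT-2 checkpoint.
    # Assumes: pip install transformers torch (not part of the article).
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # smallest released model
    samples = generator("The committee concluded that", max_length=40,
                        do_sample=True, num_return_sequences=2)
    for s in samples:
        print(s["generated_text"])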

The full version is more convincing than the smaller one, but only "marginally". The relatively limited increase in credibility was part of what encouraged the researchers to make it available, they said.

It hopes that the release can partly help the public understand how such a tool could be misused, and help inform discussions among experts about how that danger can be mitigated.

In February, researchers said that there was a variety of ways that malicious people could misuse the programme. The outputted text could be used to create misleading news articles, impersonate other people, automatically create abusive or fake content for social media or to use to spam people with – along with a variety of possible uses that might not even have been imagined yet, they noted.

Such misuses would require the public to become more critical about the text they read online, which could have been generated by artificial intelligence, they said.

The info is here.

Monday, December 9, 2019

The rise of the greedy-brained ape: Book Review

Shilluk tribespeople gather in a circle under a large tree for traditional storytelling.
Tim Radford
Nature.com
Originally published 30 Oct 19

Here is an excerpt:

For her hugely enjoyable sprint through human evolutionary history, Vince (erstwhile news editor of this journal) intertwines many threads: language and writing; the command of tools, pursuit of beauty and appetite for trinkets; and the urge to build things, awareness of time and pursuit of reason. She tracks the cultural explosion, triggered by technological discovery, that gathered pace with the first trade in obsidian blades in East Africa at least 320,000 years ago. That has climaxed this century with the capacity to exploit 40% of the planet’s total primary production.

How did we do it? Vince examines, for instance, our access to and use of energy. Other primates must chew for five hours a day to survive. Humans do so for no more than an hour. We are active 16 hours a day, a tranche during which other mammals sleep. We learn by blind variation and selective retention. Vince proposes that our ancestors enhanced that process of learning from each other with the command of fire: it is 10 times more efficient to eat cooked meat than raw, and heat releases 50% of all the carbohydrates in cereals and tubers.

Thus Homo sapiens secured survival and achieved dominance by exploiting extra energy. The roughly 2,000 calories ideally consumed by one human each day generates about 90 watts: enough energy for one incandescent light bulb. At the flick of a switch or turn of a key, the average human now has access to roughly 2,300 watts of energy from the hardware that powers our lives — and the richest have much more.
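
The 90-watt figure is a straightforward unit conversion; a quick back-of-envelope check, assuming roughly 4,184 joules per food calorie, looks like this:

    # Back-of-envelope check of the "one light bulb" claim.
    KCAL_TO_JOULES = 4184            # one food calorie (kcal) in joules
    SECONDS_PER_DAY = 24 * 60 * 60

    daily_intake_kcal = 2000
    watts = daily_intake_kcal * KCAL_TO_JOULES / SECONDS_PER_DAY
    print(f"{daily_intake_kcal} kcal/day is about {watts:.0f} W")  # roughly 97 W, on the order of one incandescent bulb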

The book review is here.

Escaping Skinner's Box: AI and the New Era of Techno-Superstition

John Danaher
Philosophical Disquisitions
Originally posted October 10, 2019

Here is an excerpt:

The findings were dispiriting. Green and Chen found that using algorithms did improve the overall accuracy of decision-making across all conditions, but this was not because adding information and explanations enabled the humans to play a more meaningful role in the process. On the contrary, adding more information often made the human interaction with the algorithm worse. When given the opportunity to learn from the real-world outcomes, the humans became overconfident in their own judgments, more biased, and less accurate overall. When given explanations, they could maintain accuracy but only to the extent that they deferred more to the algorithm. In short, the more transparent the system seemed to the worker, the more the workers made its outcomes worse or limited their own agency.

It is important not to extrapolate too much from one study, but the findings here are consistent with what has been found in other cases of automation in the workplace: humans are often the weak link in the chain. They need to be kept in check. This suggests that if we want to reap the benefits of AI and automation, we may have to create an environment that is much like that of the Skinner box, one in which humans can flap their wings, convinced they are making a difference, but prevented from doing any real damage. This is the enchanted world of techno-superstition: a world in which we adopt odd rituals and habits (explainable AI; fair AI etc.) to create an illusion of control.

(cut)

These two problems combine into a third problem: the erosion of the possibility of achievement. One reason why we work is so that we can achieve certain outcomes. But when we lack understanding and control it undermines our sense of achievement. We achieve things when we use our reason to overcome obstacles to problem-solving in the real world. Some people might argue that a human collaborating with an AI system to produce some change in the world is achieving something through the combination of their efforts. But this is only true if the human plays some significant role in the collaboration. If humans cannot meaningfully make a difference to the success of AI or accurately calibrate their behaviour to produce better outcomes in tandem with the AI, then the pathway to achievement is blocked. This seems to be what happens, even when we try to make the systems more transparent.

The info is here.

Sunday, December 8, 2019

What Einstein meant by ‘God does not play dice’

Jim Baggott
aeon.com
Originally published November 21, 2019

Here is an excerpt:

But Einstein’s was a God of philosophy, not religion. When asked many years later whether he believed in God, he replied: ‘I believe in Spinoza’s God, who reveals himself in the lawful harmony of all that exists, but not in a God who concerns himself with the fate and the doings of mankind.’ Baruch Spinoza, a contemporary of Isaac Newton and Gottfried Leibniz, had conceived of God as identical with nature. For this, he was considered a dangerous heretic, and was excommunicated from the Jewish community in Amsterdam.

Einstein’s God is infinitely superior but impersonal and intangible, subtle but not malicious. He is also firmly determinist. As far as Einstein was concerned, God’s ‘lawful harmony’ is established throughout the cosmos by strict adherence to the physical principles of cause and effect. Thus, there is no room in Einstein’s philosophy for free will: ‘Everything is determined, the beginning as well as the end, by forces over which we have no control … we all dance to a mysterious tune, intoned in the distance by an invisible player.’

The special and general theories of relativity provided a radical new way of conceiving of space and time and their active interactions with matter and energy. These theories are entirely consistent with the ‘lawful harmony’ established by Einstein’s God. But the new theory of quantum mechanics, which Einstein had also helped to found in 1905, was telling a different story. Quantum mechanics is about interactions involving matter and radiation, at the scale of atoms and molecules, set against a passive background of space and time.

The info is here.

Saturday, December 7, 2019

Why do so many Americans hate the welfare state?

Elizabeth Anderson: ‘There is a profound suspicion of anyone who is poor, and a consequent raising to the highest priority imposing incredibly humiliating, harsh conditions on access to welfare benefits.’
Joe Humphries
irishtimes.com
Originally posted October 24, 2019

Interview with Elizabeth Anderson

Here is an excerpt:

Many ethical problems today are presented as matters of individual rather than collective responsibility. Instead of looking at structural injustices, for example, people are told to recycle more to save the environment, or to manage their workload better to avoid exploitation. Where does this bias come from?

“One way to think about it is this is another bizarre legacy of Calvinist thought. It’s really deep in Protestantism that each individual is responsible for their own salvation.

“It’s really an anti-Catholic thing, right? The Catholics have this giant institution that’s going to help people; and Protestantism says, no, no, no, it’s totally you and your conscience, or your faith.

“That individualism – the idea that I’ve got to save myself – got secularised over time. And it is deep, much deeper in America than in Europe – not only because there are way more Catholics in Europe who never bought into this ideology – but also in Europe due to the experience of the two World Wars they realised they are all in the boat together and they better work together or else all is lost.

“America was never under existential threat. So you didn’t have that same sense of the absolute necessity for individual survival that we come together as a nation. I think those experiences are really profound and helped to propel the welfare state across Europe post World War II.”

You’re well known for promoting the idea of relational equality. Tell us a bit about it.

“For a few decades now I’ve been advancing the idea that the fundamental aim of egalitarianism is to establish relations of equality: What are the social relations with the people around us? And that aims to take our focus away from just how much money is in my pocket.

“People do not exist for the sake of money. Wealth exists to enhance your life and not the other way around. We should be focusing on what are we doing to each other in our obsession with maximising profits. How are workers being treated? How are consumers being treated? How is the environment being treated?”

The info is here.

Friday, December 6, 2019

The female problem: how male bias in medical trials ruined women's health

Gabrielle Jackson
The Guardian
Originally posted 13 Nov 19

Here is an excerpt:

The result of this male bias in research extends beyond clinical practice. Of the 10 prescription drugs taken off the market by the US Food and Drug Administration between 1997 and 2000 due to severe adverse effects, eight caused greater health risks in women. A 2018 study found this was a result of “serious male biases in basic, preclinical, and clinical research”.

The campaign had an effect in the US: in 1993, the FDA and the NIH mandated the inclusion of women in clinical trials. Between the 70s and 90s, these organisations and many other national and international regulators had a policy that ruled out women of so-called childbearing potential from early-stage drug trials.

The reasoning went like this: since women are born with all the eggs they will ever produce, they should be excluded from drug trials in case the drug proves toxic and impedes their ability to reproduce in the future.

The result was that all women were excluded from trials, regardless of their age, gender status, sexual orientation or wish or ability to bear children. Men, on the other hand, constantly reproduce their sperm, meaning they represent a reduced risk. It sounds like a sensible policy, except it treats all women like walking wombs and has introduced a huge bias into the health of the human race.

In their 1994 book Outrageous Practices, Leslie Laurence and Beth Weinhouse wrote: “It defies logic for researchers to acknowledge gender difference by claiming women’s hormones can affect study results – for instance, by affecting drug metabolism – but then to ignore these differences, study only men and extrapolate the results to women.”

The info is here.

Ethical research — the long and bumpy road from shirked to shared

Sarah Franklin
Nature.com
Originally posted October 29, 2019

Here is an excerpt:

Beyond bewilderment

Just as the ramifications of the birth of modern biology were hard to delineate in the late nineteenth century, so there is a sense of ethical bewilderment today. The feeling of being overwhelmed is exacerbated by a lack of regulatory infrastructure or adequate policy precedents. Bioethics, once a beacon of principled pathways to policy, is increasingly lost, like Simba, in a sea of thundering wildebeest. Many of the ethical challenges arising from today’s turbocharged research culture involve rapidly evolving fields that are pursued by globally competitive projects and teams, spanning disparate national regulatory systems and cultural norms. The unknown unknowns grow by the day.

The bar for proper scrutiny has not so much been lowered as sawn to pieces: dispersed, area-specific ethical oversight now exists in a range of forms for every acronym from AI (artificial intelligence) to GM organisms. A single, Belmont-style umbrella no longer seems likely, or even feasible. Much basic science is privately funded and therefore secretive. And the mergers between machine learning and biological synthesis raise additional concerns. Instances of enduring and successful international regulation are rare. The stereotype of bureaucratic, box-ticking ethical compliance is no longer fit for purpose in a world of CRISPR twins, synthetic neurons and self-driving cars.

Bioethics evolves, as does any other branch of knowledge. The post-millennial trend has been to become more global, less canonical and more reflexive. The field no longer relies on philosophically derived mandates codified into textbook formulas. Instead, it functions as a dashboard of pragmatic instruments, and is less expert-driven, more interdisciplinary, less multipurpose and more bespoke. In the wake of the ‘turn to dialogue’ in science, bioethics often looks more like public engagement — and vice versa. Policymakers, polling companies and government quangos tasked with organizing ethical consultations on questions such as mitochondrial donation (‘three-parent embryos’, as the media would have it) now perform the evaluations formerly assigned to bioethicists.

The info is here.

Thursday, December 5, 2019

Galileo’s Big Mistake

Philip Goff
Scientific American Blog
Originally posted November 7, 2019

Here is an excerpt:

Galileo, as it were, stripped the physical world of its qualities; and after he’d done that, all that remained were the purely quantitative properties of matter—size, shape, location, motion—properties that can be captured in mathematical geometry. In Galileo’s worldview, there is a radical division between the following two things:
  • The physical world with its purely quantitative properties, which is the domain of science,
  • Consciousness, with its qualities, which is outside of the domain of science.
It was this fundamental division that allowed for the possibility of mathematical physics: once the qualities had been removed, all that remained of the physical world could be captured in mathematics. And hence, natural science, for Galileo, was never intended to give us a complete description of reality. The whole project was premised on setting qualitative consciousness outside of the domain of science.

What do these 17th century discussions have to do with the contemporary science of consciousness? It is now broadly agreed that consciousness poses a very serious challenge for contemporary science. Despite rapid progress in our understanding of the brain, we still have no explanation of how complex electrochemical signaling could give rise to a subjective inner world of colors, sounds, smells and tastes.

Although this problem is taken very seriously, many assume that the way to deal with this challenge is simply to continue with our standard methods for investigating the brain. The great success of physical science in explaining more and more of our universe ought to give us confidence, it is thought, that physical science will one day crack the puzzle of consciousness.

The blog post is here.

How Misinformation Spreads--and Why We Trust It

Cailin O'Connor and James Owen Weatherall
Scientific American
Originally posted September 2019

Here is an excerpt:

Many communication theorists and social scientists have tried to understand how false beliefs persist by modeling the spread of ideas as a contagion. Employing mathematical models involves simulating a simplified representation of human social interactions using a computer algorithm and then studying these simulations to learn something about the real world. In a contagion model, ideas are like viruses that go from mind to mind.

You start with a network, which consists of nodes, representing individuals, and edges, which represent social connections.  You seed an idea in one “mind” and see how it spreads under various assumptions about when transmission will occur.
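
As a concrete illustration of the kind of contagion model described here, the following minimal sketch simulates round-by-round spread of a belief over a small random network; the network size, the seed node, and the transmission probability are all invented for the example.

    # Toy contagion model: a belief spreads from node to node over a random network.
    # Network size, seed node, and transmission probability are illustrative assumptions.
    import random

    random.seed(1)
    n_nodes, p_edge, p_transmit = 50, 0.08, 0.3
    edges = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)
             if random.random() < p_edge]

    believers = {0}  # "patient zero" holds the idea
    for step in range(10):
        newly_convinced = set()
        for i, j in edges:
            if (i in believers) != (j in believers) and random.random() < p_transmit:
                newly_convinced.add(j if i in believers else i)
        believers |= newly_convinced
        print(f"step {step + 1}: {len(believers)} of {n_nodes} nodes hold the belief")

Depending on the random draws, the belief typically reaches much of the connected network within a few steps, which is the qualitative behavior the contagion framing is meant to capture.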

Contagion models are extremely simple but have been used to explain surprising patterns of behavior, such as the epidemic of suicide that reportedly swept through Europe after publication of Goethe's The Sorrows of Young Werther in 1774 or when dozens of U.S. textile workers in 1962 reported suffering from nausea and numbness after being bitten by an imaginary insect. They can also explain how some false beliefs propagate on the Internet.

Before the last U.S. presidential election, an image of a young Donald Trump appeared on Facebook. It included a quote, attributed to a 1998 interview in People magazine, saying that if Trump ever ran for president, it would be as a Republican because the party is made up of “the dumbest group of voters.” Although it is unclear who “patient zero” was, we know that this meme passed rapidly from profile to profile.

The meme's veracity was quickly evaluated and debunked. The fact-checking Web site Snopes reported that the quote was fabricated as early as October 2015. But as with the tomato hornworm, these efforts to disseminate truth did not change how the rumors spread. One copy of the meme alone was shared more than half a million times. As new individuals shared it over the next several years, their false beliefs infected friends who observed the meme, and they, in turn, passed the false belief on to new areas of the network.

This is why many widely shared memes seem to be immune to fact-checking and debunking. Each person who shared the Trump meme simply trusted the friend who had shared it rather than checking for themselves.

Putting the facts out there does not help if no one bothers to look them up. It might seem like the problem here is laziness or gullibility—and thus that the solution is merely more education or better critical thinking skills. But that is not entirely right.

Sometimes false beliefs persist and spread even in communities where everyone works very hard to learn the truth by gathering and sharing evidence. In these cases, the problem is not unthinking trust. It goes far deeper than that.

The info is here.

Wednesday, December 4, 2019

Veterans Must Also Heal From Moral Injury After War

Camillo Mac Bica
truthout.org
Originally published Nov 11, 2019

Here are two excerpts:

Humankind has identified and internalized a set of values and norms through which we define ourselves as persons, structure our world and render our relationship to it — and to other human beings — comprehensible. These values and norms provide the parameters of our being: our moral identity. Consequently, we now have the need and the means to weigh concrete situations to determine acceptable (right) and unacceptable (wrong) behavior.

Whether an individual chooses to act rightly or wrongly, according to or in violation of her moral identity, will affect whether she perceives herself as true to her personal convictions and to others in the moral community who share her values and ideals. As the moral gravity of one’s actions and experiences on the battlefield becomes apparent, a warrior may suffer profound moral confusion and distress at having transgressed her moral foundations, her moral identity.

Guilt is, simply speaking, the awareness of having transgressed one’s moral convictions and the anxiety precipitated by a perceived breakdown of one’s ethical cohesion — one’s integrity — and an alienation from the moral community. Shame is the loss of self-esteem consequent to a failure to live up to personal and communal expectations.

(cut)

Having completed the necessary philosophical and psychological groundwork, veterans can now begin the very difficult task of confronting the experience. That is, of remembering, reassessing and morally reevaluating their responsibility and culpability for their perceived transgressions on the battlefield.

Reassessing their behavior in combat within the parameters of their increased philosophical and psychological awareness, veterans realize that the programming to which they were subjected and the experience of war as a survival situation are causally connected to those specific battlefield incidents and behaviors, theirs and/or others’, that weigh heavily on their consciences — their moral injury. As a consequence, they understand these influences as extenuating circumstances.

Finally, as they morally reevaluate their actions in war, they see these incidents and behaviors in combat not as justifiable, but as understandable, perhaps even excusable, and their culpability mitigated by the fact that those who determined policy, sent them to war, issued the orders, and allowed the war to occur and/or to continue unchallenged must share responsibility for the crimes and horror that inevitably characterize war.

The info is here.

AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense

Department of Defense
Defense Innovation Board
Published November 2019

Here is an excerpt:

What DoD is Doing to Establish an Ethical AI Culture

DoD’s “enduring mission is to provide combat-credible military forces needed to deter war and protect the security of our nation.” As such, DoD seeks to responsibly integrate and leverage AI across all domains and mission areas, as well as business administration, cybersecurity, decision support, personnel, maintenance and supply, logistics, healthcare, and humanitarian programs. Notably, many AI use cases are non-lethal in nature. From making battery fuel cells more efficient to predicting kidney disease in our veterans to managing fraud in supply chain management, AI has myriad applications throughout the Department.

DoD is mission-oriented, and to complete its mission, it requires access to cutting edge technologies to support its warfighters at home and abroad. These technologies, however, are only one component to fulfilling its mission. To ensure the safety of its personnel, to comply with the Law of War, and to maintain an exquisite professional force, DoD maintains and abides by myriad processes, procedures, rules, and laws to guide its work. These are buttressed by DoD’s strong commitment to the following values: leadership, professionalism, and technical knowledge through the dedication to duty, integrity, ethics, honor, courage, and loyalty. As DoD utilizes AI in its mission, these values ground, inform, and sustain the AI Ethics Principles.

As DoD continues to comply with existing policies, processes, and procedures, as well as to create new opportunities for responsible research and innovation in AI, there are several cases where DoD is beginning to or already engaging in activities that comport with the calls from the DoD AI Strategy and the AI Ethics Principles enumerated here.

The document is here.

Tuesday, December 3, 2019

AI Ethics is All About Power

Khari Johnson
venturebeat.com
Originally published Nov 11, 2019


Here is an excerpt:

“The gap between those who develop and profit from AI and those most likely to suffer the consequences of its negative effects is growing larger, not smaller,” the report reads, citing a lack of government regulation in an AI industry where power is concentrated among a few companies.

Dr. Safiya Noble and Sarah Roberts chronicled the impact of the tech industry’s lack of diversity in a paper UCLA published in August. The coauthors argue that we’re now witnessing the “rise of a digital technocracy” that masks itself as post-racial and merit-driven but is actually a power system that hoards resources and is likely to judge a person’s value based on racial identity, gender, or class.

“American corporations were not able to ‘self-regulate’ and ‘innovate’ an end to racial discrimination — even under federal law. Among modern digital technology elites, myths of meritocracy and intellectual superiority are used as racial and gender signifiers that disproportionately consolidate resources away from people of color, particularly African-Americans, Latinx, and Native Americans,” reads the report. “Investments in meritocratic myths suppress interrogations of racism and discrimination even as the products of digital elites are infused with racial, class, and gender markers.”

Despite talk about how to solve tech’s diversity problem, much of the tech industry has only made incremental progress, and funding for startups with Latinx or black founders still lags behind those for white founders. To address the tech industry’s general lack of progress on diversity and inclusion initiatives, a pair of Data & Society fellows suggested that tech and AI companies embrace racial literacy.

The info is here.

Editor's Note: The article covers a huge swath of information.

A Constructionist Review of Morality and Emotions: No Evidence for Specific Links Between Moral Content and Discrete Emotions

Cameron, C. D., Lindquist, K. A., & Gray, K.
Pers Soc Psychol Rev. 
2015 Nov;19(4):371-94.
doi: 10.1177/1088868314566683.

Abstract

Morality and emotions are linked, but what is the nature of their correspondence? Many "whole number" accounts posit specific correspondences between moral content and discrete emotions, such that harm is linked to anger, and purity is linked to disgust. A review of the literature provides little support for these specific morality-emotion links. Moreover, any apparent specificity may arise from global features shared between morality and emotion, such as affect and conceptual content. These findings are consistent with a constructionist perspective of the mind, which argues against a whole number of discrete and domain-specific mental mechanisms underlying morality and emotion. Instead, constructionism emphasizes the flexible combination of basic and domain-general ingredients such as core affect and conceptualization in creating the experience of moral judgments and discrete emotions. The implications of constructionism in moral psychology are discussed, and we propose an experimental framework for rigorously testing morality-emotion links.

Monday, December 2, 2019

A.I. Systems Echo Biases They’re Fed, Putting Scientists on Guard

Cade Metz
The New York Times
Originally published Nov 11, 2019

Here is the conclusion:

“This is hard. You need a lot of time and care,” he said. “We found an obvious bias. But how many others are in there?”

Dr. Bohannon said computer scientists must develop the skills of a biologist. Much as a biologist strives to understand how a cell works, software engineers must find ways of understanding systems like BERT.
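
The kind of inspection described here often starts with simple behavioral probes. A minimal sketch, assuming the Hugging Face transformers package and the public "bert-base-uncased" checkpoint (neither is named in the article), fills a masked slot in two templated sentences and compares the model's top guesses:

    # Hedged sketch of a simple behavioral probe of a masked-language model.
    # Assumes: pip install transformers torch; the model and templates are illustrative.
    from transformers import pipeline

    unmasker = pipeline("fill-mask", model="bert-base-uncased")
    templates = ["The doctor said [MASK] would be late.",
                 "The nurse said [MASK] would be late."]
    for template in templates:
        top = unmasker(template)[:3]  # three highest-probability fillers
        print(template, "->", [t["token_str"] for t in top])

Systematic differences in the completions across such templates are the kind of signal an auditor would flag for closer review.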

In unveiling the new version of its search engine last month, Google executives acknowledged this phenomenon. And they said they tested their systems extensively with an eye toward removing any bias.

Researchers are only beginning to understand the effects of bias in systems like BERT. But as Dr. Munro showed, companies are already slow to notice even obvious bias in their systems. After Dr. Munro pointed out the problem, Amazon corrected it. Google said it was working to fix the issue.

Primer’s chief executive, Sean Gourley, said vetting the behavior of this new technology would become so important that it would spawn a whole new industry, in which companies pay specialists to audit their algorithms for all kinds of bias and other unexpected behavior.

“This is probably a billion-dollar industry,” he said.

The whole article is here.

Neuroscientific evidence in the courtroom: a review.

Aono, D., Yaffe, G., & Kober, H.
Cogn. Research 4, 40 (2019)
doi:10.1186/s41235-019-0179-y

Abstract

The use of neuroscience in the courtroom can be traced back to the early twentieth century. However, the use of neuroscientific evidence in criminal proceedings has increased significantly over the last two decades. This rapid increase has raised questions, among the media as well as the legal and scientific communities, regarding the effects that such evidence could have on legal decision makers. In this article, we first outline the history of neuroscientific evidence in courtrooms and then we provide a review of recent research investigating the effects of neuroscientific evidence on decision-making broadly, and on legal decisions specifically. In the latter case, we review studies that measure the effect of neuroscientific evidence (both imaging and nonimaging) on verdicts, sentencing recommendations, and beliefs of mock jurors and judges presented with a criminal case. Overall, the reviewed studies suggest mitigating effects of neuroscientific evidence on some legal decisions (e.g., the death penalty). Furthermore, factors such as mental disorder diagnoses and perceived dangerousness might moderate the mitigating effect of such evidence. Importantly, neuroscientific evidence that includes images of the brain does not appear to have an especially persuasive effect (compared with other neuroscientific evidence that does not include an image). Future directions for research are discussed, with a specific call for studies that vary defendant characteristics, the nature of the crime, and a juror’s perception of the defendant, in order to better understand the roles of moderating factors and cognitive mediators of persuasion.

Significance

The increased use of neuroscientific evidence in criminal proceedings has led some to wonder what effects such evidence has on legal decision makers (e.g., jurors and judges) who may be unfamiliar with neuroscience. There is some concern that legal decision makers may be unduly influenced by testimony and images related to the defendant’s brain. This paper briefly reviews the history of neuroscientific evidence in the courtroom to provide context for its current use. It then reviews the current research examining the influence of neuroscientific evidence on legal decision makers and potential moderators of such effects. Our synthesis of the findings suggests that neuroscientific evidence has some mitigating effects on legal decisions, although neuroimaging-based evidence does not hold any special persuasive power. With this in mind, we provide recommendations for future research in this area. Our review and conclusions have implications for scientists, legal scholars, judges, and jurors, who could all benefit from understanding the influence of neuroscientific evidence on judgments in criminal cases.

Sunday, December 1, 2019

Moral Reasoning and Emotion

Joshua May & Victor Kumar
Published in
The Routledge Handbook of Moral Epistemology,
eds. Karen Jones, Mark Timmons, and
Aaron Zimmerman, Routledge (2018), pp. 139-156.

Abstract:

This chapter discusses contemporary scientific research on the role of reason and emotion in moral judgment. The literature suggests that moral judgment is influenced by both reasoning and emotion separately, but there is also emerging evidence of the interaction between the two. While there are clear implications for the rationalism-sentimentalism debate, we conclude that important questions remain open about how central emotion is to moral judgment. We also suggest ways in which moral philosophy is not only guided by empirical research but continues to guide it.

(cut)

Conclusion

We draw two main conclusions. First, on a fair and plausible characterization of reasoning and emotion, they are both integral to moral judgment. In particular, when our moral beliefs undergo changes over long periods of time, there is ample space for both reasoning and emotion to play an iterative role. Second, it’s difficult to cleave reasoning from emotional processing. When the two affect moral judgment, especially across time, their interplay can make it artificial or fruitless to impose a division, even if a distinction can still be drawn between inference and valence in information processing.

To some degree, our conclusions militate against extreme characterizations of the rationalism-sentimentalism divide. However, the debate is best construed as a question about which psychological process is more fundamental or essential to distinctively moral cognition.  The answer still affects both theoretical and practical problems, such as how to make artificial intelligence capable of moral judgment. At the moment, the more nuanced dispute is difficult to adjudicate, but it may be addressed by further research and theorizing.

The book chapter can be downloaded here.

Saturday, November 30, 2019

Are You a Moral Grandstander?

Scott Barry Kaufman
Scientific American
Originally published October 28, 2019

Here are two excerpts:

Do you strongly agree with the following statements?

  • When I share my moral/political beliefs, I do so to show people who disagree with me that I am better than them.
  • I share my moral/political beliefs to make people who disagree with me feel bad.
  • When I share my moral/political beliefs, I do so in the hopes that people different than me will feel ashamed of their beliefs.

If so, then you may be a card-carrying moral grandstander. Of course it's wonderful to have a social cause that you believe in genuinely, and which you want to share with the world to make it a better place. But moral grandstanding comes from a different place.

(cut)

Nevertheless, since we are such a social species, the human need for social status is very pervasive, and often our attempts at sharing our moral and political beliefs on public social media platforms involve a mix of genuine motives with social status motives. As one team of psychologists put it, yes, you probably are "virtue signaling" (a closely related concept to moral grandstanding), but that doesn't mean that your outrage is necessarily inauthentic. It just means that we often have a subconscious desire to signal our virtue, which when not checked, can spiral out of control and cause us to denigrate or be mean to others in order to satisfy that desire. When the need for status predominates, we may even lose touch with what we truly believe, or even what is actually the truth.

The info is here.

Friday, November 29, 2019

Drivers are blamed more than their automated cars when both make mistakes

Edmond Awad and others
Nature Human Behaviour (2019)
Published: 28 October 2019


Abstract

When an automated car harms someone, who is blamed by those who hear about it? Here we asked human participants to consider hypothetical cases in which a pedestrian was killed by a car operated under shared control of a primary and a secondary driver and to indicate how blame should be allocated. We find that when only one driver makes an error, that driver is blamed more regardless of whether that driver is a machine or a human. However, when both drivers make errors in cases of human–machine shared-control vehicles, the blame attributed to the machine is reduced. This finding portends a public under-reaction to the malfunctioning artificial intelligence components of automated cars and therefore has a direct policy implication: allowing the de facto standards for shared-control vehicles to be established in courts by the jury system could fail to properly regulate the safety of those vehicles; instead, a top-down scheme (through federal laws) may be called for.

The research is here.