Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, August 12, 2019

Rural hospitals foundering in states that declined Obamacare

Michael Braga, Jennifer F. A. Borresen, Dak Le and Jonathan Riley
GateHouse Media
Originally published July 28, 2019

Here is an excerpt:

While experts agree that embracing Obamacare is not a cure-all for rural hospitals and would not have saved many of those that closed, few believe it was wise to turn the money down.

The crisis facing rural America has been raging for decades and the carnage is not expected to end any time soon.

High rates of poverty in rural areas, combined with the loss of jobs, aging populations, lack of health insurance and competition from other struggling institutions will make it difficult for some rural hospitals to survive regardless of what government policies are implemented.

For some, there’s no point in trying. They say the widespread closures are the result of the free market economy doing its job and a continued shakeout would be helpful. But no rural community wants that shakeout to happen in its backyard.

“A hospital closure is a frightening thing for a small town,” said Patti Davis, president of the Oklahoma Hospital Association. “It places lives in jeopardy and has a domino effect on the community. Health care professionals leave, pharmacies can’t stay open, nursing homes have to close and residents are forced to rely on ambulances to take them to the next closest facility in their most vulnerable hours.”

The info is here.

Why it now pays for businesses to put ethics before economics

John Drummond
The National
Originally published July 14, 2019

Here is an excerpt:

All major companies today have an ethics code or a statement of business principles. I know this because at one time my company designed such codes for many FTSE companies. And all of these codes enshrine a commitment to moral standards. And these standards are often higher than those required by law.

When the boards of companies agree to these principles they largely do so because they believe in them – at the time. However, time moves on. People move on. The business changes. Along the way, company people forget.

So how can you tell if a business still believes in its stated principles? Actually, it is very simple. When an ethical problem, such as Mossmorran, happens, look to see who turns up to answer concerns. If it is a public relations man or woman, the company has lost the plot. By contrast, if it is the executive who runs the business, then the company is likely still in close touch with its ethical standards.

Economics and ethics can be seen as a spectrum. Ethics is at one side of the spectrum and economics at the other. Few organisations, or individuals for that matter, can operate on purely ethical lines alone, and few operate on solely economic considerations. Most organisations can be placed somewhere along this spectrum.

So, if a business uses public relations to shield top management from a problem, it occupies a position closer to economics than to ethics. On the other hand, where corporate executives face their critics directly, then the company would be located nearer to ethics.

The info is here.

Sunday, August 11, 2019

Challenges to capture the big five personality traits in non-WEIRD populations

Rachid Laajaj, Karen Macours, and others
Science Advances  10 Jul 2019:
Vol. 5, no. 7
DOI: 10.1126/sciadv.aaw5226

Abstract

Can personality traits be measured and interpreted reliably across the world? While the use of Big Five personality measures is increasingly common across social sciences, their validity outside of western, educated, industrialized, rich, and democratic (WEIRD) populations is unclear. Adopting a comprehensive psychometric approach to analyze 29 face-to-face surveys from 94,751 respondents in 23 low- and middle-income countries, we show that commonly used personality questions generally fail to measure the intended personality traits and show low validity. These findings contrast with the much higher validity of these measures attained in internet surveys of 198,356 self-selected respondents from the same countries. We discuss how systematic response patterns, enumerator interactions, and low education levels can collectively distort personality measures when assessed in large-scale surveys. Our results highlight the risk of misinterpreting Big Five survey data and provide a warning against naïve interpretations of personality traits without evidence of their validity.
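
The "comprehensive psychometric approach" the abstract mentions starts with basics such as the internal-consistency reliability of each trait's items. As a minimal sketch (the data, item counts, and trait label below are hypothetical, not from the study), Cronbach's alpha for one Big Five trait can be computed like this:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for one trait's items.

    items: (n_respondents, n_items) array of Likert responses.
    """
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Hypothetical 5-point responses to three "extraversion" items:
# correlated items (a shared base score plus noise) should yield high alpha.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(200, 1))
responses = np.clip(base + rng.integers(-1, 2, size=(200, 3)), 1, 5)
alpha = cronbach_alpha(responses)
```

Low values of statistics like this in the face-to-face surveys, relative to the internet surveys, would be one symptom of the measurement failure the authors report.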

The research is here.

Saturday, August 10, 2019

Emotions and beliefs about morality can change one another

Monica Bucciarelli and P.N. Johnson-Laird
Acta Psychologica
Volume 198, July 2019

Abstract

A dual-process theory postulates that belief and emotions about moral assertions can affect one another. The present study corroborated this prediction. Experiments 1, 2 and 3 showed that the pleasantness of a moral assertion – from loathing it to loving it – correlated with how strongly individuals believed it, i.e., its subjective probability. But, despite repeated testing, this relation did not occur for factual assertions. To create the correlation, it sufficed to change factual assertions, such as, “Advanced countries are democracies,” into moral assertions, “Advanced countries should be democracies”. Two further experiments corroborated the two-way causal relations for moral assertions. Experiment 4 showed that recall of pleasant memories about moral assertions increased their believability, and that the recall of unpleasant memories had the opposite effect. Experiment 5 showed that the creation of reasons to believe moral assertions increased the pleasantness of the emotions they evoked, and that the creation of reasons to disbelieve moral assertions had the opposite effect. Hence, emotions can change beliefs about moral assertions; and reasons can change emotions about moral assertions. We discuss the implications of these results for alternative theories of morality.

The research is here.

Here is a portion of the Discussion:

In sum, emotions and beliefs correlate for moral assertions, and a change in one can cause a change in the other. The main theoretical problem is to explain these results. They should hardly surprise Utilitarians. As we mentioned in the Introduction, one interpretation of their views (Jon Baron, p.c.) is that it is tautological to predict that if you believe a moral assertion then you will like it. And this interpretation implies that our experiments are studies in semantics, which corroborate the existence of tautologies depending on the meanings of words (contra Quine, 1953; cf. Quelhas, Rasga, & Johnson-Laird, 2017). But the degrees to which participants believed the moral assertions varied from certain to impossible. An assertion that they rated as probable as not is hardly a tautology, and it tended to occur with an emotional reaction of indifference. The hypothesis of a tautological interpretation cannot explain this aspect of an overall correlation in ratings on scales.

Friday, August 9, 2019

The Human Brain Project Hasn’t Lived Up to Its Promise

Ed Yong
www.theatlantic.com
Originally published July 22, 2019

Here is an excerpt:

Markram explained that, contra his TED Talk, he had never intended for the simulation to do much of anything. He wasn’t out to make an artificial intelligence, or beat a Turing test. Instead, he pitched it as an experimental test bed—a way for scientists to test their hypotheses without having to prod an animal’s head. “That would be incredibly valuable,” Lindsay says, but it’s based on circular logic. A simulation might well allow researchers to test ideas about the brain, but those ideas would already have to be very advanced to pull off the simulation in the first place. “Once neuroscience is ‘finished,’ we should be able to do it, but to have it as an intermediate step along the way seems difficult.”

“It’s not obvious to me what the very large-scale nature of the simulation would accomplish,” adds Anne Churchland from Cold Spring Harbor Laboratory. Her team, for example, simulates networks of neurons to study how brains combine visual and auditory information. “I could implement that with hundreds of thousands of neurons, and it’s not clear what it would buy me if I had 70 billion.”

In a recent paper titled “The Scientific Case for Brain Simulations,” several HBP scientists argued that big simulations “will likely be indispensable for bridging the scales between the neuron and system levels in the brain.” In other words: Scientists can look at the nuts and bolts of how neurons work, and they can study the behavior of entire organisms, but they need simulations to show how the former create the latter. The paper’s authors drew a comparison to weather forecasts, in which an understanding of physics and chemistry at the scale of neighborhoods allows us to accurately predict temperature, rainfall, and wind across the whole globe.

The info is here.

Advice for technologists on promoting AI ethics

Joe McKendrick
www.zdnet.com
Originally posted July 13, 2019

Ethics looms as a vexing issue when it comes to artificial intelligence (AI). Where does AI bias spring from, especially when it's unintentional? Are companies paying enough attention to it as they plunge full-force into AI development and deployment? Are they doing anything about it? Do they even know what to do about it?

Wringing bias and unintended consequences out of AI is making its way into the job descriptions of technology managers and professionals, especially as business leaders turn to them for guidance and judgement. The drive to ethical AI means an increased role for technologists in the business, as described in a study of 1,580 executives and 4,400 consumers from the Capgemini Research Institute. The survey was able to make direct connections between AI ethics and business growth: if consumers sense a company is employing AI ethically, they'll keep coming back; if they sense unethical AI practices, their business is gone.

Competitive pressure is the reason businesses are pushing AI to its limits and risking crossing ethical lines. "The pressure to implement AI is fueling ethical issues," the Capgemini authors, led by Anne-Laure Thieullent, managing director of Capgemini's Artificial Intelligence & Analytics Group, state. "When we asked executives why ethical issues resulting from AI are an increasing problem, the top-ranked reason was the pressure to implement AI." Thirty-four percent cited this pressure to stay ahead with AI trends.

The info is here.

Thursday, August 8, 2019

Microsoft wants to build artificial general intelligence: an AI better than humans at everything

Kelsey Piper
www.vox.com
Originally published July 22, 2019

Here is an excerpt:

Existing AI systems beat humans at lots of narrow tasks — chess, Go, Starcraft, image generation — and they’re catching up to humans at others, like translation and news reporting. But an artificial general intelligence would be one system with the capacity to surpass us at all of those things. Enthusiasts argue that it would enable centuries of technological advances to arrive, effectively, all at once — transforming medicine, food production, green technologies, and everything else in sight.

Others warn that, if poorly designed, it could be a catastrophe for humans in a few different ways. A sufficiently advanced AI could pursue a goal that we hadn’t intended — a recipe for catastrophe. It could turn out unexpectedly impossible to correct once running. Or it could be maliciously used by a small group of people to harm others. Or it could just make the rich richer and leave the rest of humanity even further in the dust.

Getting AGI right may be one of the most important challenges ahead for humanity. Microsoft’s billion dollar investment has the potential to push the frontiers forward for AI development, but to get AGI right, investors have to be willing to prioritize safety concerns that might slow commercial development.

The info is here.

Not all who ponder count costs: Arithmetic reflection predicts utilitarian tendencies, but logical reflection predicts both deontological and utilitarian tendencies

Nick Byrd and Paul Conway
Cognition
https://doi.org/10.1016/j.cognition.2019.06.007

Abstract

Conventional sacrificial moral dilemmas propose directly causing some harm to prevent greater harm. Theory suggests that accepting such actions (consistent with utilitarian philosophy) involves more reflective reasoning than rejecting such actions (consistent with deontological philosophy). However, past findings do not always replicate, confound different kinds of reflection, and employ conventional sacrificial dilemmas that treat utilitarian and deontological considerations as opposite. In two studies, we examined whether past findings would replicate when employing process dissociation to assess deontological and utilitarian inclinations independently. Findings suggested two categorically different impacts of reflection: measures of arithmetic reflection, such as the Cognitive Reflection Test, predicted only utilitarian, not deontological, response tendencies. However, measures of logical reflection, such as performance on logical syllogisms, positively predicted both utilitarian and deontological tendencies. These studies replicate some findings, clarify others, and reveal opportunity for additional nuance in dual process theorists' claims about the link between reflection and dilemma judgments.
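
Process dissociation, as used here, estimates the two inclinations from responses to congruent dilemmas (harm is not justified by its outcomes, so both ethics reject it) and incongruent dilemmas (harm maximizes outcomes, so the two ethics conflict). A minimal sketch of one standard formulation (Conway & Gawronski, 2013); the response proportions below are hypothetical:

```python
def process_dissociation(p_reject_congruent: float,
                         p_reject_incongruent: float) -> tuple[float, float]:
    """Estimate utilitarian (U) and deontological (D) parameters.

    Model:
        P(reject harm | congruent)   = U + (1 - U) * D
        P(reject harm | incongruent) = (1 - U) * D
    Solving the pair gives U and D independently,
    rather than treating them as opposite ends of one scale.
    """
    u = p_reject_congruent - p_reject_incongruent
    d = p_reject_incongruent / (1 - u)
    return u, d

# Hypothetical response proportions for one participant.
u, d = process_dissociation(p_reject_congruent=0.9, p_reject_incongruent=0.3)
```

With these numbers the participant's utilitarian and deontological parameters come apart, which is what lets the authors ask whether different kinds of reflection predict each one separately.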

A copy of the paper is here.

Wednesday, August 7, 2019

First do no harm: the impossible oath

Kamran Abbasi
BMJ 2019; 366
doi: https://doi.org/10.1136/bmj.l4734

Here is the beginning:

Discussions about patient safety describe healthcare as an industry. If that’s the case then what is healthcare’s business? What does it manufacture? Health and wellbeing? Possibly. But we know for certain that healthcare manufactures harm. Look at the data from our new research paper on the prevalence, severity, and nature of preventable harm (doi:10.1136/bmj.l4185). Maria Panagioti and colleagues find that the prevalence of overall harm, preventable and non-preventable, is 12% across medical care settings. Around half of this is preventable.

These data make something of a mockery of our principal professional oath to first do no harm. Working in clinical practice, we do harm that we cannot prevent or avoid, such as by appropriately prescribing a drug that causes an adverse drug reaction. As our experience, evidence, and knowledge improve, what isn’t preventable today may well be preventable in the future.

The argument, then, isn’t over whether healthcare causes harm but about the exact estimates of harm and how much of it is preventable. The answer that Panagioti and colleagues deliver from their systematic review of the available evidence is the best we have at the moment, though it isn’t perfect. The definitions of preventable harm differ. Existing studies are heterogeneous and focused more on overall rather than preventable harm. The standard method is the retrospective case record review. The need, say the authors, is for better research in all fields and more research on preventable harms in primary care, psychiatry, and developing countries, and among children and older adults.