Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Friday, May 31, 2019

The Ethics of Smart Devices That Analyze How We Speak

Trevor Cox
Harvard Business Review
Originally posted May 20, 2019

Here is an excerpt:

But what happens when machines start analyzing how we talk? The big tech firms are coy about exactly what they are planning to detect in our voices and why, but Amazon has a patent that lists a range of traits they might collect, including identity (“gender, age, ethnic origin, etc.”), health (“sore throat, sickness, etc.”), and feelings (“happy, sad, tired, sleepy, excited, etc.”).

This worries me — and it should worry you, too — because algorithms are imperfect. And voice is particularly difficult to analyze because the signals we give off are inconsistent and ambiguous. What’s more, the inferences that even humans make are distorted by stereotypes. Let’s use the example of trying to identify sexual orientation. There is a style of speaking with raised pitch and swooping intonations which some people assume signals a gay man. But confusion often arises because some heterosexuals speak this way, and many homosexuals don’t. Science experiments show that human aural “gaydar” is only right about 60% of the time. Studies of machines attempting to detect sexual orientation from facial images have shown a success rate of about 70%. Sound impressive? Not to me, because that means those machines are wrong 30% of the time. And I would anticipate success rates to be even lower for voices, because how we speak changes depending on who we’re talking to. Our vocal anatomy is very flexible, which allows us to be oral chameleons, subconsciously changing our voices to fit in better with the person we’re speaking with.

The info is here.

Can Science Explain Morality?

Charles Glenn
National Review
Originally published May 2, 2019

Here is an excerpt:

Useful as these studies can be, however, they leave us with a diminished sense of the moral weight of human personhood. Right and wrong, good and evil, and so forth are human constructs that derive from human evolutionary history, the cognitive architecture of human language, neurochemistry and neuroanatomy, and contingent human interests. Thus the fundamental source of morality is not outside human experience and biology. There are no real rights, duties, or valuable things out in the world. The nature and quality of moral attitudes — thinking, feeling, or believing that something is either moral or immoral — can be explained psychologically and culturally.

That is, good and evil have no anchor in the basic structure and significance of our existence (indeed, existence itself has no significance) but are entirely contingent. This leaves us in a vacuum of purpose, one that we can easily see reflected in the hedonistic confusion of contemporary culture. “Are there really things we should and shouldn’t do beyond what would best serve our interests and preferences?” we might well ask. “Are some things valuable in an objective sense, beyond what we happen to want or care about?”

The information is here.

Thursday, May 30, 2019

Confronting bias in judging: A framework for addressing psychological biases in decision making

Tom Stafford, Jules Holroyd, & Robin Scaife
PsyArXiv
Last edited on December 24, 2018

Abstract

Cognitive biases are systematic tendencies of thought which undermine accurate or fair reasoning. An allied concept is that of ‘implicit bias’: biases directed at members of particular social identities that may manifest without the individual’s endorsement or awareness. This article reviews the literature on cognitive bias, broadly conceived, and makes proposals for how judges might usefully think about avoiding bias in their decision making. Contra some portrayals of cognitive bias as ‘unconscious’ or unknowable, we contend that things can be known about our psychological biases, and steps taken to address them. We argue for the benefits of a unified treatment of cognitive and implicit biases and propose a “3 by 3” framework which can be used by individuals and institutions to review their practice with respect to addressing bias. We emphasise that addressing bias requires an ongoing commitment to monitoring, evaluation and review rather than one-off interventions.

The research is here.

How Big Tech is struggling with the ethics of AI

Madhumita Murgia & Siddarth Shrikanth
Financial Times
Originally posted April 28, 2019

Here is an excerpt:

The development and application of AI is causing huge divisions both inside and outside tech companies, and Google is not alone in struggling to find an ethical approach.

The companies that are leading research into AI in the US and China, including Google, Amazon, Microsoft, Baidu, SenseTime and Tencent, have taken very different approaches to AI and to whether to develop technology that can ultimately be used for military and surveillance purposes.

For instance, Google has said it will not sell facial recognition services to governments, while Amazon and Microsoft both do so. They have also been attacked for the algorithmic bias of their programmes, where computers inadvertently propagate bias through unfair or corrupt data inputs.

In response to criticism not only from campaigners and academics but also their own staff, companies have begun to self-regulate by trying to set up their own “AI ethics” initiatives that perform roles ranging from academic research — as in the case of Google-owned DeepMind’s Ethics and Society division — to formulating guidelines and convening external oversight panels.

The info is here.

Wednesday, May 29, 2019

Why Do We Need Wisdom To Lead In The Future?

Sesil Pir
Forbes.com
Originally posted May 19, 2019

Here is an excerpt:

We live in a society that encourages us to think about how to have a great career but leaves us inarticulate about how to cultivate the inner life. The road to success is paved so definitively through competition, and so fiercely, that it becomes all-consuming for many of us. It is commonly accepted today that information is the key source of all being; yet information alone doesn’t endow one with knowledge, just as knowledge alone doesn’t lead to righteous action. In the age of artificial information, we need to look beyond data to drive purposeful progress and authentic illumination.

Wisdom in the context of leadership refers to our quality of having good, sound judgment. It is a source that provides light into our own insight and introduces a new appreciation for the world around us. It helps us recognize that others are more than our limiting impressions of them. It fills us with confidence that we are connected and more capable than we could ever dream.

People with this quality tend to lead from a place of strong internal cohesion. They have overcome fragmentation to reach a level of integration, which supports the way they show up – tranquil, settled and rooted. These people tend to withstand the hard winds of volatility and do not easily crumble in the face of adversity. They ground their thoughts, emotions and behaviors in values that feed their self-efficacy, and they understand at heart that perfectionism is an unattainable goal.

The info is here.

The Problem with Facebook


Making Sense Podcast

Originally posted on March 27, 2019

In this episode of the Making Sense podcast, Sam Harris speaks with Roger McNamee about his book Zucked: Waking Up to the Facebook Catastrophe.

Roger McNamee has been a Silicon Valley investor for thirty-five years. He has co-founded successful venture funds, including Elevation with U2’s Bono. He is a former mentor to Facebook CEO Mark Zuckerberg and helped recruit COO Sheryl Sandberg to the company. He holds a B.A. from Yale University and an M.B.A. from the Tuck School of Business at Dartmouth College.

The podcast is here.

The discussion of the fundamental ethical problems with social media companies like Facebook and Google starts about 20 minutes into the podcast.

Tuesday, May 28, 2019

Should Students Take Smart Drugs?

Darian Meacham
www.philosophersmag.com
Originally posted December 8, 2017

If this were a straightforward question, you would not be reading about it in a philosophy magazine. But you are, so it makes sense that we try to clarify the terms of the discussion before wading in too far. Unfortunately (or fortunately depending on how you look at it), when philosophers set out to de-obfuscate what look to be relatively forthright questions, things usually get more complicated rather than less: each of the operative terms at stake in the question, ‘should students take smart drugs?’ opens us up onto larger debates about the nature of medicine, health, education, learning, and creativity as well as economic, political and social structures and norms. So, in a sense, a seemingly rather narrow question about a relatively peripheral issue in the education sector morphs into a much larger question about how we think about and value learning; what constitutes psychiatric illness and in what ways should we deal with it; and what sort of productivity should educational institutions like universities, but also secondary and even primary schools value and be oriented towards?

The first question that needs to be addressed is what is a ‘smart drug’? I have in mind two things when I use the term here:

(1) On the one hand, existing psychostimulants normally prescribed for children and adults with a variety of conditions, most prominently ADHD (Attention Deficit Hyperactivity Disorder), but also various others like narcolepsy, shift-work sleep disorder and schizophrenia. Commonly known by brand and generic names like Adderall, Ritalin, and Modafinil, these drugs are often used off-label or sold on the grey market for what could be called non-medical or ‘enhancement’ purposes. The off-label use of psychostimulants for cognitive enhancement purposes is reported to be quite widespread in the USA. So the debate over the use of smart drugs is very much tied up with debates about how the behavioural and cognitive disorders for which these drugs are prescribed are diagnosed and what the causes of such conditions are.

(2) On the other hand, the philosophical-ethical debate around smart drugs need not be restricted to currently existing technologies. Broader issues at stake in the debate allow us to reflect on questions surrounding possible future cognitive enhancement technologies, and even much older ones. In this sense, the question about the use of smart drugs situates itself in a broader discussion about cognitive enhancement and enhancement in general.

The info is here.

Values in the Filter Bubble: Ethics of Personalization Algorithms in Cloud Computing

Engin Bozdag and Job Timmermans
Delft University of Technology
Faculty of Technology, Policy and Management

Abstract

Cloud services such as Facebook and Google Search have started to use personalization algorithms in order to deal with the growing amount of data online. This is often done in order to reduce “information overload”. The user’s interaction with the system is recorded under a single identity, and information is personalized for the user using this identity. However, as we argue, such filters often ignore the context of information and they are never value neutral. These algorithms operate without the control and knowledge of the user, leading to a “filter bubble”. In this paper we use the Value Sensitive Design methodology to identify the values and value assumptions implicated in personalization algorithms. By building on existing philosophical work, we discuss three human values implicated in personalized filtering: autonomy, identity, and transparency.

A copy of the paper is here.

Monday, May 27, 2019

How To Prevent AI Ethics Councils From Failing

Manoj Saxena
www.forbes.com
Originally posted April 30, 2019

There's nothing “artificial” about Artificial Intelligence-infused systems. These systems are real and are already impacting everyone today through automated predictions and decisions. However, these digital brains may demonstrate unpredictable behaviors that can be disruptive, confusing, offensive, and even dangerous. Therefore, before getting started with an AI strategy, it’s critical for companies to consider AI through the lens of building systems they can trust.

Educate on the criticality of a ‘people and ethics first’ approach

AI systems often function in oblique, invisible ways that may harm the most vulnerable. Examples of such harm include loss of privacy, discrimination, loss of skills, economic impact, the security of critical infrastructure, and long-term effects on social well-being.

The “technology and monetization first” approach to AI needs to evolve into a “people and ethics first” approach. Ethically aligned design is a set of societal and policy guidelines for the development of intelligent and autonomous systems to ensure such systems serve humanity’s values and ethical principles.

Multiple noteworthy organizations and countries have proposed guidelines for developing trusted AI systems going forward. These include the IEEE, World Economic Forum, the Future of Life Institute, Alan Turing Institute, AI Global, and the Government of Canada. Once you have your guidelines in place, you can then start educating everyone internally and externally about the promise and perils of AI and the need for an AI ethics council.

The info is here.

Sunday, May 26, 2019

Brain science should be making prisons better, not trying to prove innocence

Arielle Baskin-Sommers
theconversation.com
Originally posted November 1, 2017

Here is an excerpt:

Unfortunately, when neuroscientific assessments are presented to the court, they can sway juries, regardless of their relevance. Using these techniques to produce expert evidence doesn’t bring the court any closer to truth or justice. And with a single brain scan costing thousands of dollars, plus expert interpretation and testimony, it’s an expensive tool out of reach for many defendants. Rather than helping untangle legal responsibility, neuroscience here causes an even deeper divide between the rich and the poor, based on pseudoscience.

While I remain skeptical about the use of neuroscience in the judicial process, there are a number of places where its findings could help corrections systems develop policies and practices based on evidence.

Solitary confinement harms more than helps

Take, for instance, the use within prisons of solitary confinement as a punishment for disciplinary infractions. In 2015, the Bureau of Justice Statistics reported that nearly 20 percent of federal and state prisoners and 18 percent of local jail inmates had spent time in solitary.

Research consistently demonstrates that time spent in solitary increases the chances of persistent emotional trauma and distress. Solitary can lead to hallucinations, fantasies and paranoia; it can increase anxiety, depression and apathy as well as difficulties in thinking, concentrating, remembering, paying attention and controlling impulses. People placed in solitary are more likely to engage in self-mutilation as well as exhibit chronic rage, anger and irritability. The term “isolation syndrome” has even been coined to capture the severe and long-lasting effects of solitary.

The info is here.

Saturday, May 25, 2019

Lost-in-the-mall: False memory or false defense?

Ruth A. Blizard & Morgan Shaw (2019)
Journal of Child Custody
DOI: 10.1080/15379418.2019.1590285

Abstract

False Memory Syndrome (FMS) and Parental Alienation Syndrome (PAS) were developed as defenses for parents accused of child abuse as part of a larger movement to undermine prosecution of child abuse. The lost-in-the-mall study by Dr. Elizabeth Loftus concludes that an entire false memory can be implanted by suggestion. It has since been used to discredit abuse survivors’ testimony by inferring that false memories for childhood abuse can be implanted by psychotherapists. Examination of the research methods and findings of the study shows that no full false memories were actually formed. Similarly, PAS, coined by Richard Gardner, is frequently used in custody cases to discredit children’s testimony by alleging that the protective parent coached them to have false memories of abuse. There is no scientific research demonstrating the existence of PAS, and, in fact, studies on the suggestibility of children show that they cannot easily be persuaded to provide detailed disclosures of abuse.

The info is here.

Friday, May 24, 2019

Immutable morality: Even God could not change some moral facts

Madeline Reinecke & Zachary Horne
PsyArXiv
Last edited December 24, 2018

Abstract

The idea that morality depends on God is a widely held belief. This belief entails that the moral “facts” could be otherwise because, in principle, God could change them. Yet, some moral propositions seem so obviously true (e.g., the immorality of killing someone just for pleasure) that it is hard to imagine how they could be otherwise. In two experiments, we investigated people’s intuitions about the immutability of moral facts. Participants judged whether it was even possible, or possible for God, to change moral, logical, and physical facts. In both experiments, people judged that altering some moral facts was impossible—not even God could turn morally wrong acts into morally right acts. Strikingly, people thought that God could make physically impossible and logically impossible events occur. These results demonstrate the strength of people’s metaethical commitments and shed light on the nature of morality and its centrality to thinking and reasoning.

The research is here.

Holding Robots Responsible: The Elements of Machine Morality

Y. Bigman, A. Waytz, R. Alterovitz, and K. Gray
Trends in Cognitive Sciences

Abstract


As robots become more autonomous, people will see them as more responsible for wrongdoing. Moral psychology suggests that judgments of robot responsibility will hinge on perceived situational awareness, intentionality, and free will—plus anthropomorphism and the robot’s capacity for harm. We also consider questions of robot rights and moral decision-making.

Here is an excerpt:

Philosophy, law, and modern cognitive science all reveal that judgments of human moral responsibility hinge on autonomy. This explains why children, who seem to have less autonomy than adults, are held less responsible for wrongdoing. Autonomy is also likely crucial in judgments of robot moral responsibility. The reason people ponder and debate the ethical implications of drones and self-driving cars (but not tractors or blenders) is because these machines can act autonomously.

Admittedly, today’s robots have limited autonomy, but it is an expressed goal of roboticists to develop fully autonomous robots—machine systems that can act without human input. As robots become more autonomous, their potential for moral responsibility will only grow. Even as roboticists create robots with more “objective” autonomy, we note that “subjective” autonomy may be more important: work in cognitive science suggests that autonomy and moral responsibility are more matters of perception than objective truths.

The info can be downloaded here.

Thursday, May 23, 2019

Priming intuition disfavors instrumental harm but not impartial beneficence

Valerio Capraro, Jim Everett, & Brian Earp
PsyArXiv Preprints
Last Edited April 17, 2019

Abstract

Understanding the cognitive underpinnings of moral judgment is one of the most pressing problems in psychological science. Some highly-cited studies suggest that reliance on intuition decreases utilitarian (expected welfare maximizing) judgments in sacrificial moral dilemmas in which one has to decide whether to instrumentally harm (IH) one person to save a greater number of people. However, recent work suggests that such dilemmas are limited in that they fail to capture the positive, defining core of utilitarianism: commitment to impartial beneficence (IB). Accordingly, a new two-dimensional model of utilitarian judgment has been proposed that distinguishes IH and IB components. The role of intuition on this new model has not been studied. Does relying on intuition disfavor utilitarian choices only along the dimension of instrumental harm, or does it also do so along the dimension of impartial beneficence? To answer this question, we conducted three studies (total N = 970, two preregistered) using conceptual priming of intuition versus deliberation on moral judgments. Our evidence converges on an interaction effect, with intuition decreasing utilitarian judgments in IH—as suggested by previous work—but failing to do so in IB. These findings bolster the recently proposed two-dimensional model of utilitarian moral judgment, and point to new avenues for future research.

The research is here.

Pre-commitment and Updating Beliefs

Charles R. Ebersole
Doctoral Dissertation, University of Virginia

Abstract

Beliefs help individuals make predictions about the world. When those predictions are incorrect, it may be useful to update beliefs. However, motivated cognition and biases (notably, hindsight bias and confirmation bias) can instead lead individuals to reshape interpretations of new evidence to seem more consistent with prior beliefs. Pre-committing to a prediction or evaluation of new evidence before knowing its results may be one way to reduce the impact of these biases and facilitate belief updating. I first examined this possibility by having participants report predictions about their performance on a challenging anagrams task before or after completing the task. Relative to those who reported predictions after the task, participants who pre-committed to predictions reported predictions that were more discrepant from actual performance and updated their beliefs about their verbal ability more (Studies 1a and 1b). The effect on belief updating was strongest among participants who directly tested their predictions (Study 2) and belief updating was related to their evaluations of the validity of the task (Study 3). Furthermore, increased belief updating seemed to not be due to faulty or shifting memory of initial ratings of verbal ability (Study 4), but rather reflected an increase in the discrepancy between predictions and observed outcomes (Study 5). In a final study (Study 6), I examined pre-commitment as an intervention to reduce confirmation bias, finding that pre-committing to evaluations of new scientific studies eliminated the relation between initial beliefs and evaluations of evidence while also increasing belief updating. Together, these studies suggest that pre-commitment can reduce biases and increase belief updating in light of new evidence.

The dissertation is here.

Wednesday, May 22, 2019

Healthcare portraiture and unconscious bias

Karthik Sivashanker, Kathryn Rexrode, and others
BMJ 2019;365:l1668
Published April 12, 2019
https://doi.org/10.1136/bmj.l1668

Here is an excerpt:

Conveying the right message

In this regard, healthcare organisations have opportunities to instil a feeling of belonging and comfort for all their employees and patients. A simple but critical step is to examine the effect that their use of all imagery, as exemplified by portraits, has on their constituents. Are these portraits sufficiently conveying a message of social justice and equity? Do they highlight the achievement (as with a picture of a petri dish), or the person (a picture of Alexander Fleming without sufficient acknowledgment of his contributions)? Further still, do these images reveal the values of the organisation or its biases?

At our institution in Boston there was no question that the leaders depicted had made meaningful contributions to our hospital and healthcare. After soliciting feedback through listening sessions, open forums, and inbox feedback from our art committee, employees, clinicians, and students, however, our institution agreed to hang these portraits in their respective departments. This decision aimed to balance a commitment to equity with an intent to honourably display these portraits, which have inspired generations of physicians and scientists to be their best. It also led our social justice and equity committee to tackle problems like unconscious bias and diversity in hiring. In doing so, we are acknowledging the close interplay of symbolism and policy making in perpetuating racial and sex inequities, and the importance of tackling both together.

The info is here.

Why Behavioral Scientists Need to Think Harder About the Future

Ed Brandon
www.behavioralscientist.org
Originally published January 17, 2019

Here is an excerpt:

It’s true that any prediction made a century out will almost certainly be wrong. But thinking carefully and creatively about the distant future can sharpen our thinking about the present, even if what we imagine never comes to pass. And if this feels like we’re getting into the realms of (behavioral) science fiction, then that’s a feeling we should lean into. Whether we like it or not, futuristic visions often become shorthand for talking about technical concepts. Public discussions about A.I. safety, or automation in general, rarely manage to avoid at least a passing reference to the Terminator films (to the dismay of leading A.I. researchers). In the behavioral science sphere, plodding Orwell comparisons are now de rigueur whenever “government” and “psychology” appear in the same sentence. If we want to enrich the debate beyond an argument about whether any given intervention is or isn’t like something out of 1984, expanding our repertoire of sci-fi touch points can help.

As the Industrial Revolution picked up steam, accelerating technological progress raised the possibility that even the near future might look very different to the present. In the nineteenth century, writers such as Jules Verne, Mary Shelley, and H. G. Wells started to write about the new worlds that might result. Their books were not dry lists of predictions. Instead, they explored the knock-on effects of new technologies, and how ordinary people might react. Invariably, the most interesting bits of these stories were not the technologies themselves but the social and dramatic possibilities they opened up. In Shelley’s Frankenstein, there is the horror of creating something you do not understand and cannot control; in Wells’s War of the Worlds, peripeteia as humans get dislodged from the top of the civilizational food chain.

The info is here.

Tuesday, May 21, 2019

Bergen County psychologist charged with repeated sexual assaults of a child

Joe Brandt
www.nj.com
Originally posted April 18, 2019

A psychologist whose business works with children was charged Wednesday with multiple sexual assaults of a child under 13 years old.

Lorenzo Puertas, 78, faces two counts of sexual assault and one count of endangering the welfare of a child, Bergen County Prosecutor Dennis Calo announced Thursday.

Puertas, of Franklin Lakes, served as executive director of Psych-Ed Services, which has offices in Franklin Lakes and in Lakewood. The health provider offers bilingual psychological services, including pre-employment psych screenings and child study team evaluations.

The info is here.

Moral Disengagement in the Corporate World

Jenny White M.Sc. M.P.H., Albert Bandura Ph.D. & Lisa A. Bero Ph.D.
(2009) Accountability in Research, 16(1), 41–74.
DOI: 10.1080/08989620802689847

Abstract

We analyze mechanisms of moral disengagement used to eliminate moral consequences by industries whose products or production practices are harmful to human health. Moral disengagement removes the restraint of self-censure from harmful practices. Moral self-sanctions can be selectively disengaged from harmful activities by investing them with socially worthy purposes, sanitizing and exonerating them, displacing and diffusing responsibility, minimizing or disputing harmful consequences, making advantageous comparisons, and disparaging and blaming critics and victims. Internal industry documents and public statements related to the research activities of these industries were coded for modes of moral disengagement by the tobacco, lead, vinyl chloride (VC), and silicosis-producing industries. All but one of the modes of moral disengagement were used by each of these industries. We present possible safeguards designed to protect the integrity of research.

A copy of the research is here.

Monday, May 20, 2019

How Drug Companies Helped Shape a Shifting Biological View of Mental Illness

Terry Gross
NPR Health Shots
Originally posted May 2, 2019

Here are two excerpts:

On why the antidepressant market is now at a standstill

The huge development in the story of depression and the antidepressants happens in the late '90s, when a range of different studies increasingly seemed to suggest that these antidepressants — although they're helping a lot of people — don't seem to do much better when compared to placebo versions of themselves. And that is not because they are not helping people, but because the placebos are also helping people. Simply thinking you're taking Prozac, I guess, can have a powerful effect on your state of depression. In order, though, for a drug to get on the market, it's got to beat the placebo. If it can't beat the placebo, the drug fails.

(cut)

On why pharmaceutical companies are leaving the psychiatric field

Because there have been no new good ideas as to where to look for new, novel biomarkers or targets since the 1960s. The only possible exception is there is now some excitement about ketamine, which targets a different set of biochemical systems. But R&D is very expensive. These drugs are now, mostly, off-patent. ... [The pharmaceutical companies'] efforts to bring on new drugs in that sort of tried-and-true and tested way — with a tinker here and a tinker there — has been running up against mostly unexplained but indubitable problems with the placebo effect.

The info is here.

Ethics board fines former Pa. judge for texts, sex with defendant's girlfriend

Mark Scolforo
The Associated Press
Originally published April 26, 2019

Pennsylvania's judicial ethics board fined a since-retired district judge $5,000 this week for having sex with the girlfriend of a defendant, sending her salacious texts and letting his own lawyer practice before him without telling the other parties.

The Court of Judicial Discipline fined former Bradford County District Judge Michael G. Shaw and issued a severe reprimand, saying he had appeared to be genuinely contrite.

Shaw spent 24 years as district judge in Sayre, but did not run for re-election in 2017, as he was being investigated.

The court says Shaw, who was not charged criminally, appeared to be genuinely remorseful for his conduct. His lawyer in the proceedings, William Hebe, did not return a phone message Friday.

Shaw is a former police officer and is not a lawyer. Pennsylvania magisterial district judges, who are elected, do not have to be licensed lawyers. District judges set bail and conduct preliminary hearings for serious crimes, and handle minor offenses and lower-level civil matters.

Court findings say Shaw was supervising treatment court in 2014 when a repeat DUI defendant he knew enrolled in the court program. Shaw had worked for the man's father years before.

The man's girlfriend subsequently told Shaw on Facebook that she was breaking up with the man, leading to a series of messages that became sexual in nature.

The info is here.

Sunday, May 19, 2019

House Democrats seek details of Trump ethics waivers

Kate Ackley
www.rollcall.com
Originally posted May 17, 2019

Rep. Elijah E. Cummings, chairman of the Oversight and Reform Committee, wants a status update on the state of the swamp in the Trump administration.

The Maryland Democrat launched an investigation late this week into the administration’s use of ethics waivers, which allow former lobbyists to work on matters they handled in their previous private sector jobs. Cummings sent letters to the White House and 24 agencies and Cabinet departments requesting copies of their ethics pledges and details of any waivers that could expose “potential conflicts of interest.”

“Although the White House committed to providing information on ethics waivers on its website, the White House has failed to disclose comprehensive information about the waivers,” Cummings wrote in a May 16 letter to White House counsel Pat Cipollone.

A White House official declined comment on the investigation, and a committee aide said the administration had not yet responded to the requests. A spokeswoman for Rep. Jim Jordan of Ohio, the top Republican on the Oversight panel, did not immediately provide a comment.

After President Donald Trump ran on a “drain the swamp” message, the Trump administration ushered in a tough-sounding ethics pledge through an executive order in January 2017 requiring officials to recuse themselves from participating in matters they had lobbied on in the previous two years. But the waivers allow appointees to circumvent those restrictions.

The info is here.

Saturday, May 18, 2019

The Neuroscience of Moral Judgment

Joanna Demaree-Cotton & Guy Kahane
Published in The Routledge Handbook of Moral Epistemology, eds. Karen Jones, Mark Timmons, and Aaron Zimmerman (Routledge, 2018).

Abstract:

This chapter examines the relevance of the cognitive science of morality to moral epistemology, with special focus on the issue of the reliability of moral judgments. It argues that the kind of empirical evidence of most importance to moral epistemology is at the psychological rather than neural level. The main theories and debates that have dominated the cognitive science of morality are reviewed with an eye to their epistemic significance.

1. Introduction

We routinely make moral judgments about the rightness of acts, the badness of outcomes, or people’s characters. When we form such judgments, our attention is usually fixed on the relevant situation, actual or hypothetical, not on our own minds. But our moral judgments are obviously the result of mental processes, and we often enough turn our attention to aspects of this process—to the role, for example, of our intuitions or emotions in shaping our moral views, or to the consistency of a judgment about a case with more general moral beliefs.

Philosophers have long reflected on the way our minds engage with moral questions—on the conceptual and epistemic links that hold between our moral intuitions, judgments, emotions, and motivations. This form of armchair moral psychology is still alive and well, but it’s increasingly hard to pursue it in complete isolation from the growing body of research in the cognitive science of morality (CSM). This research is not only uncovering the psychological structures that underlie moral judgment but, increasingly, also their neural underpinning—utilizing, in this connection, advances in functional neuroimaging, brain lesion studies, psychopharmacology, and even direct stimulation of the brain. Evidence from such research has been used not only to develop grand theories about moral psychology, but also to support ambitious normative arguments.

Friday, May 17, 2019

More than 300 overworked NHS nurses have died by suicide in just seven years

Alan Selby
The Mirror
Originally posted April 27, 2019

More than 300 nurses have taken their own lives in just seven years, shocking new figures reveal.

During the worst year, one was dying by suicide EVERY WEEK as Tory cuts began to bite deep into the NHS.

Today victims’ families call for vital early mental health training and support for young nurses – and an end to a “bullying and toxic culture” in the health service which leaves them afraid to ask for help in their darkest moments.

One mum – whose trainee nurse daughter Lucy de Oliveira killed herself while juggling other jobs to make ends meet – told us: “They’re working all hours God sends doing a really important job. Most of them would be better off working in McDonald’s. That can’t be right.”

Shadow Health Secretary Jonathan Ashworth has called for a government inquiry into the “alarming” figures – 23 per cent higher than the national average – from 2011 to 2017, the latest year on record.

“Every life lost is a desperate tragedy,” he said. “The health and wellbeing of NHS staff must never be compromised.”

The info is here.

Scientific Misconduct in Psychology: A Systematic Review of Prevalence Estimates and New Empirical Data

Johannes Stricker & Armin Günther
Zeitschrift für Psychologie
Published online: March 29, 2019

Abstract

Spectacular cases of scientific misconduct have contributed to concerns about the validity of published results in psychology. In our systematic review, we identified 16 studies reporting prevalence estimates of scientific misconduct and questionable research practices (QRPs) in psychological research. Estimates from these studies varied due to differences in methods and scope. Unlike other disciplines, there was no reliable lower bound prevalence estimate of scientific misconduct based on identified cases available for psychology. Thus, we conducted an additional empirical investigation on the basis of retractions in the database PsycINFO. Our analyses showed that 0.82 per 10,000 journal articles in psychology were retracted due to scientific misconduct. Between the late 1990s and 2012, there was a steep increase. Articles retracted due to scientific misconduct were identified in 20 out of 22 PsycINFO subfields. These results show that measures aiming to reduce scientific misconduct should be promoted equally across all psychological subfields.

The research is here.


Thursday, May 16, 2019

It’s Our ‘Moral Responsibility’ to Give The FBI Access to Your DNA

Jennings Brown
www.gizmodo.com
Originally published April 3, 2019

A popular DNA-testing company seems to be targeting true crime fans with a new pitch to let them share their genetic information with law enforcement so cops can catch violent criminals.

Two months ago, FamilyTreeDNA raised privacy concerns after BuzzFeed revealed the company had partnered with the FBI and given the agency access to the genealogy database. Law enforcement’s use of DNA databases has been widely known since last April when California officials revealed genealogy website information was instrumental in determining the identity of the Golden State Killer. But in that case, detectives used publicly shared raw genetic data on GEDmatch. The recent news about FamilyTreeDNA marked the first known time a home DNA test company had willingly shared private genetic information with law enforcement.

Several weeks later, FamilyTreeDNA changed their rules to allow customers to block the FBI from accessing their information. “Users now have the ability to opt out of matching with DNA relatives whose accounts are flagged as being created to identify the remains of a deceased individual or a perpetrator of a homicide or sexual assault,” the company said in a statement at the time.

But now the company seems to be embracing this partnership with law enforcement through its new campaign, “Families Want Answers.”

The info is here.

Memorial Sloan Kettering Leaders Violated Conflict-of-Interest Rules, Report Finds

Charles Ornstein and Katie Thomas
ProPublica.org
Originally posted April 4, 2019

Top officials at Memorial Sloan Kettering Cancer Center repeatedly violated policies on financial conflicts of interest, fostering a culture in which profits appeared to take precedence over research and patient care, according to details released on Thursday from an outside review.

The findings followed months of turmoil over executives’ ties to drug and health care companies at one of the nation’s leading cancer centers. The review, conducted by the law firm Debevoise & Plimpton, was outlined at a staff meeting on Thursday morning. It concluded that officials frequently violated or skirted their own policies; that hospital leaders’ ties to companies were likely considered on an ad hoc basis rather than through rigorous vetting; and that researchers were often unaware that some senior executives had financial stakes in the outcomes of their studies.

In acknowledging flaws in its oversight of conflicts of interest, the cancer center announced on Thursday an extensive overhaul of policies governing employees’ relationships with outside companies and financial arrangements — including public disclosure of doctors’ ties to corporations and limits on outside work.

The info is here.

Wednesday, May 15, 2019

Moral self-judgment is stronger for future than past actions

Sjåstad, H. & Baumeister, R.F.
Motiv Emot (2019).
https://doi.org/10.1007/s11031-019-09768-8

Abstract

When, if ever, would a person want to be held responsible for his or her choices? Across four studies (N = 915), people favored more extreme rewards and punishments for their future than their past actions. This included thinking that they should receive more blame and punishment for future misdeeds than for past ones, and more credit and reward for future good deeds than for past ones. The tendency to moralize the future more than the past was mediated by anticipating (one’s own) emotional reactions and concern about one’s reputation, which was stronger in the future as well. The findings fit the pragmatic view that people moralize the future partly to guide their choices and actions, such as by increasing their motivation to restrain selfish impulses and build long-term cooperative relationships with others. People typically believe that the future is open and changeable, while the past is not. We conclude that the psychology of moral accountability has a strong future component.

Here is an excerpt from the Concluding Remarks:

A recent article by Uhlmann, Pizarro, and Diermeier (2015) proposed an important shift in the foundation of moral psychology. Whereas most research has focused on how people judge moral actions, Uhlmann et al. proposed that the primary, focal purpose is to judge persons. They suggested that this has a prospective dimension: Ultimately, the pragmatic goal is to know whom one can cooperate with, rely on, and otherwise trust in the future. Judging past actions is a means toward predicting the future, with the focus on individual persons.

The present findings fit well with and even extend that analysis. The orientation toward the future is not limited to judging and predicting the moral character of others but also extends to oneself. If one functional purpose of morality is to promote group cohesion and cooperation in the future, people apparently think that part of that involves raising expectations and standards for their own future behavior as well.

The pre-print can be found here.

Students' Ethical Decision‐Making When Considering Boundary Crossings With Counselor Educators

Stephanie T. Burns
Counseling and Values
First published: 10 April 2019
https://doi.org/10.1002/cvj.12094

Abstract

Counselor education students (N = 224) rated 16 boundary‐crossing scenarios involving counselor educators. They viewed boundary crossings as unethical and were aware of power differentials between the 2 groups. Next, they rated the scenarios again, after reviewing 1 of 4 ethical informational resources: relevant standards in the ACA Code of Ethics (American Counseling Association, 2014), 2 different boundary‐crossing decision‐making models, and a placebo. Although participants rated all resources except the placebo as moderately helpful, these resources had little to no influence on their ethical decision‐making. Only 47% of students in the 2 ethical decision‐making model groups reported they would use the model they were exposed to in the future when contemplating boundary crossings.

Here is a portion from Implications for Practice and Training

Counselor education students took conservative stances toward the 16 boundary-crossing scenarios with counselor educators. These findings support results of previous researchers who stated that students struggle with even the smallest of boundary crossings (Kozlowski et al., 2014) because they understand that power differentials have implications for grades, evaluations, recommendation letters, and obtaining authentic skill development feedback (Gu et al., 2011). Counselor educators need to be aware that students rate withholding appropriate feedback because of the educator’s personal feelings toward a student, failing to provide required supervision time in practicum, and taking first authorship when the student performed all the work on the submission as being as abusive as having sex with a student.

The research is here.

Tuesday, May 14, 2019

Is ancient philosophy the future?

Donald Robertson
The Globe and Mail
Originally published April 19, 2019

Recently, a bartender in Nova Scotia showed me a quote from the Roman emperor Marcus Aurelius tattooed on his forearm. “Waste no more time arguing what a good man should be,” it said, “just be one.”

We live in an age when social media bombards everyone, especially the young, with advice about every aspect of their lives. Stoic philosophy, of which Marcus Aurelius was history’s most famous proponent, taught its followers not to waste time on diversions that don’t actually improve their character.

In recent decades, Stoicism has been experiencing a resurgence in popularity, especially among millennials. There has been a spate of popular self-help books that helped to spread the word. One of the best known is Ryan Holiday and Steven Hanselman’s The Daily Stoic, which introduced a whole new generation to the concept of philosophy, based on the classics, as a way of life. It has fuelled interest among Silicon Valley entrepreneurs. So has endorsement from self-improvement guru Tim Ferriss, who describes Stoicism as the “ideal operating system for thriving in high-stress environments.”

Why should the thoughts of a Roman emperor who died nearly 2,000 years ago seem particularly relevant today, though? What’s driving this rebirth of Stoicism?

The info is here.

Who Should Decide How Algorithms Decide?

Mark Esposito, Terence Tse, Joshua Entsminger, and Aurelie Jean
Project Syndicate
Originally published April 17, 2019

Here is an excerpt:

Consider the following scenario: a car from China has different factory standards than a car from the US, but is shipped to and used in the US. This Chinese-made car and a US-made car are heading for an unavoidable collision. If the Chinese car’s driver has different ethical preferences than the driver of the US car, which system should prevail?

Beyond culturally based differences in ethical preferences, one also must consider differences in data regulations across countries. A Chinese-made car, for example, might have access to social-scoring data, allowing its decision-making algorithm to incorporate additional inputs that are unavailable to US carmakers. Richer data could lead to better, more consistent decisions, but should that advantage allow one system to overrule another?

Clearly, before AVs take to the road en masse, we will need to establish where responsibility for algorithmic decision-making lies, be it with municipal authorities, national governments, or multilateral institutions. More than that, we will need new frameworks for governing this intersection of business and the state. At issue is not just what AVs will do in extreme scenarios, but how businesses will interact with different cultures in developing and deploying decision-making algorithms.

The info is here.

Monday, May 13, 2019

How has President Trump changed white Christians' views of 'morality'?

Brandon Showalter
The Christian Post
Originally published April 26, 2019

A notable shift has taken place within the past decade regarding how white evangelicals consider "morality" with regard to the politicians they support.

While the subject was frequently discussed during the 2016 election cycle in light of the significant support then-candidate Donald Trump received from evangelical Christians, the shift in attitudes about whether what an elected official does in his private life has any bearing on his public duties appears to have persisted more than two years into his presidency, The Washington Post noted Thursday.

A 2011 Public Religion Research Institute (PRRI) and Religion News Service poll found that 60 percent of white evangelicals believed that a public official who “commits an immoral act in their personal life” cannot still “behave ethically and fulfill their duties in their public and professional life.”

By October 2016, however, shortly after the release of the “Access Hollywood” tape in which President Trump was heard making lewd comments, another PRRI poll found that only 20 percent of white evangelicals answered the same question the same way.

No other religious demographic saw such a profound change.

The info is here.

How to Be an Ethical Leader: 4 Tips for Success

Sammi Caramela
www.businessnewsdaily.com
Originally posted August 27, 2018

Here is an excerpt:

Define and align your morals

Consider the values you had growing up – treat others how you want to be treated, always say "thank you," show support to those struggling, etc. But as you grow, and as society progresses, conventions change, often causing values to shift.

"This is the biggest challenge ethics face in our culture and at work, and the biggest challenge ethical leadership faces," said Matthew Kelly, founder and CEO of FLOYD Consulting and author of "The Culture Solution" (Blue Sparrow Books, 2019). "What used to be universally accepted as good and true, right and just, is now up for considerable debate. This environment of relativism makes it very difficult for values-based leaders."

Kelly added that to find success in ethical leadership, you should demonstrate how adhering to specific values benefits the mission of the organization.

"Culture is not a collection of personal preferences," he said. "Mission is king. When that ceases to be true, an organization has begun its journey toward the mediocre middle."

Ask yourself what matters to you as an individual and then align that with your priorities as a leader. Defining your morals not only expresses your authenticity, it encourages your team to do the same, creating a shared vision for all workers.

Hire those with similar ethics

While your ethics don't need to be the same as your workers', you should be able to establish common ground with them. This often starts with the hiring process and is maintained through a vision statement.

The info is here.

Sunday, May 12, 2019

Looking at the Mueller report from a mental health perspective

 Bandy X. Lee, Leonard L. Glass and Edwin B. Fisher
The Boston Globe
Updated May 9, 2019

Here is an excerpt:

These episodes demonstrate not only a lack of control over emotions but preoccupation with threats to the self. There is no room for consideration of national plans or policies, or his own role in bringing about his predicament and how he might change, but instead a singular focus on how he is a victim of circumstance and his familiar whining about unfairness.

This mindset can easily turn into rage reactions; it is commonly found in violent offenders in the criminal justice system, who perpetually consider themselves victims under attack, even as they perpetrate violence against others, often without provocation. In this manner, a “victim mentality” and paranoia are symptoms that carry a high risk of violence.

“We noted, among other things, that the president stated on more than 30 occasions that he ‘does not recall’ or ‘remember’ or have an ‘independent recollection’ of information called for by the questions. Other answers were ‘incomplete or imprecise.’ ” (Vol. II, p. C-1)

This response is from a president who, in public rallies, rarely lacks certainty, no matter how false his assertions and claims that he has “the world’s greatest memory” and “one of the great memories of all time.” His lack of recall is particularly meaningful in the context of his unprecedented mendacity, which alone is dangerous and divisive for the country. Whether he truly does not remember or is totally fabricating, either is pathological and highly dangerous in someone who has command over the largest military in the world and over thousands of nuclear weapons.

The Mueller report details numerous lies by the president, perhaps most clearly regarding his handling of the disclosure of the meeting at Trump Tower (Vol II, p. 98ff). First he denied knowing about the meeting, then described it as only about adoption, then denied crafting his son’s response, and then, in his formal response to Mueller, conceded that it was he who dictated the press release. Lying per se is not especially remarkable. Coupled with the other characteristics noted here, however, lying becomes a part of a pervasive, compelling, reflexive pattern of distraught gut reactions for handling challenges by misleading, manipulating, and blocking others’ access to the truth. Rather than being seen as bona fide alternatives, challenges are perceived as personal threats and responded to in a dangerous, no-holds-barred manner.

The info is here.

Saturday, May 11, 2019

Free Will, an Illusion? An Answer from a Pragmatic Sentimentalist Point of View

Maureen Sie
Appears in: Caruso, G. (ed.), Exploring the Illusion of Free Will and Moral Responsibility, Rowman & Littlefield, June 2013.

According to some people, diverse findings in the cognitive and neurosciences suggest that free will is an illusion: We experience ourselves as agents, but in fact our brains decide, initiate, and judge before ‘we’ do (Soon, Brass, Heinze and Haynes 2008; Libet and Gleason 1983). Others have replied that the distinction between ‘us’ and ‘our brains’ makes no sense (e.g., Dennett 2003) or that scientists misperceive the conceptual relations that hold between free will and responsibility (Roskies 2006). Many others regard the neuroscientific findings as irrelevant to their views on free will. They do not believe that deterministic processes are incompatible with free will to begin with, and hence do not understand why deterministic processes in our brains would be (see Sie and Wouters 2008, 2010). The latter response should be understood against the background of the philosophical free will discussion. In philosophy, free will is traditionally approached as a metaphysical problem, one that needs to be dealt with in order to discuss the legitimacy of our practices of responsibility. The emergence of our moral practices is seen as a result of the assumption that we possess free will (or some capacity associated with it), and the main question discussed is whether that assumption is compatible with determinism. In this chapter we want to steer clear of this ‘metaphysical’ discussion.

The question we are interested in in this chapter is whether the above-mentioned scientific findings are relevant to our use of the concept of free will when that concept is approached from a different angle. We call this different angle the ‘pragmatic sentimentalist’ approach to free will (hereafter the PS-approach). This approach can be traced back to Peter F. Strawson’s influential essay “Freedom and Resentment” (Strawson 1962). Contrary to the metaphysical approach, the PS-approach does not understand free will as a concept that somehow precedes our moral practices. Rather, it is assumed that everyday talk of free will naturally arises in a practice that is characterized by certain reactive attitudes that we take towards one another. This is why it is called ‘sentimentalist.’ In this approach, the practical purposes of the concept of free will are put center stage. This is why it is called ‘pragmatist.’

A draft of the book chapter can be downloaded here.

Friday, May 10, 2019

Privacy, data science and personalised medicine. Time for a balanced discussion

Claudia Pagliari
LinkedIn.com Post
Originally posted March 26, 2019

There are several fundamental truths that those of us working at the intersection of data science, ethics and medical research have recognised for some time. Firstly, that ‘anonymised’ and ‘pseudonymised’ data can potentially be re-identified through the convergence of related variables, coupled with clever inference methods (although this is by no means easy). Secondly, that genetic data is not just about individuals but also about families and generations, past and future. Thirdly, as we enter an increasingly digitized society where transactional, personal and behavioural data from public bodies, businesses, social media, mobile devices and IoT are potentially linkable, the capacity of data to tell meaningful stories about us is becoming constrained only by the questions we ask and the tools we are able to deploy to get the answers. Some would say that privacy is an outdated concept, and control and transparency are the new by-words. Others either disagree or are increasingly confused and disenfranchised.

Some of the quotes from the top brass of Iceland's DeCODE Genetics, appearing in today's BBC News, neatly illustrate why we need to remain vigilant to the ethical dilemmas presented by the use of data science for personalised medicine. For those of you who are not aware, this company has been at the centre of innovation in population genomics since its inception in the 1990s and overcame a public outcry over privacy and consent, which led to its temporary bankruptcy, before rising phoenix-like from the ashes. The fact that its work has been able to continue in an era of increasing privacy legislation and regulation shows just how far the promise of personalised medicine has skewed the policy narrative and the business agenda in recent years. What makes Iceland great for medical research is that it is a relatively small country with historically low levels of immigration, a unique family naming system and good national record keeping, which means that the pedigree of most of its citizens is easy to trace. This makes it an ideal Petri dish for genetic researchers. And here's the rub: by fully genotyping only 10,000 people from this small country, with its relatively stable gene pool, and integrating this with data on their family trees - and doubtless a whole heap of questionnaires and medical records - the company has, with the consent of a few, effectively seized the data of the "entire population".

The info is here.


An Evolutionary Perspective On Free Will Belief

Cory Clark & Bo Winegard
Science Trends
Originally posted April 9, 2019

Here is an excerpt:

Both scholars and everyday people seem to agree that free will (whatever it is) is a prerequisite for moral responsibility (though note, among philosophers, there are numerous definitions and camps regarding how free will and moral responsibility are linked). This suggests that a crucial function of free will beliefs is the promotion of holding others morally responsible. And research supports this. Specifically, when people are exposed to another’s harmful behavior, they increase their broad beliefs in the human capacity for free action. Thus, believing in free will might facilitate the ability of individuals to punish harmful members of the social group ruthlessly.

But recent research suggests that free will is about more than just punishment. People might seek morally culpable agents not only when desiring to punish, but also when desiring to praise. A series of studies by Clark and colleagues (2018) found that, whereas people generally attributed more free will to morally bad actions than to morally good actions, they attributed more free will to morally good actions than morally neutral ones. Moreover, whereas free will judgments for morally bad actions were primarily driven by affective desires to punish, free will judgments for morally good actions were sensitive to a variety of characteristics of the behavior.

Thursday, May 9, 2019

The 'debate of the century': what happened when Jordan Peterson debated Slavoj Žižek

Stephen Marche
The Guardian
Originally published April 20, 2019

Here is an excerpt:

The great surprise of this debate turned out to be how much in common the old-school Marxist and the Canadian identity politics refusenik had.

One hated communism. The other hated communism but thought that capitalism possessed inherent contradictions. The first one agreed that capitalism possessed inherent contradictions. And that was basically it. They both wanted the same thing: capitalism with regulation, which is what every sane person wants. The Peterson-Žižek encounter was the ultra-rare case of a debate in 2019 that was perhaps too civil.

They needed enemies, needed combat, because in their solitudes, they had so little to offer. Peterson is neither a racist nor a misogynist. He is a conservative. He seemed, in person, quite gentle. But when you’ve said that, you’ve said everything. Somehow hectoring mobs have managed to turn him into an icon of all they are not. Remove him from his enemies and he is a very poor example of a very old thing – the type of writer who, from Samuel Smiles’ Self-Help to Eckhart Tolle’s The Power of Now, has promised simple answers to complex problems. Rules for Life, as if there were such things.

The info is here.

The moral behavior of ethics professors: A replication-extension in German-speaking countries

Philipp Schönegger & Johannes Wagner
(2019) Philosophical Psychology, 32:4, 532-559
DOI: 10.1080/09515089.2019.1587912

Abstract

What is the relation between ethical reflection and moral behavior? Does professional reflection on ethical issues positively impact moral behaviors? To address these questions, Schwitzgebel and Rust empirically investigated if philosophy professors engaged with ethics on a professional basis behave any morally better or, at least, more consistently with their expressed values than do non-ethicist professors. Findings from their original US-based sample indicated that neither is the case, suggesting that there is no positive influence of ethical reflection on moral action. In the study at hand, we attempted to cross-validate this pattern of results in the German-speaking countries and surveyed 417 professors using a replication-extension research design. Our results indicate a successful replication of the original effect that ethicists do not behave any morally better compared to other academics across the vast majority of normative issues. Yet, unlike the original study, we found mixed results on normative attitudes generally. On some issues, ethicists and philosophers even expressed more lenient attitudes. However, one issue on which ethicists not only held stronger normative attitudes but also reported better corresponding moral behaviors was vegetarianism.

Wednesday, May 8, 2019

Billions spent rebuilding Notre Dame shows lack of morality among wealthy

Gillian Fulford
Indiana Daily News Column
Originally posted April 23, 2019

Here is an excerpt:

Estimates of the cost of ending world hunger range from $7 billion to $265 billion a year, and surely, with 2,208 billionaires in the world, a few hundred could spare some cash to help ensure people aren’t starving to death. There aren’t billionaires in the news rushing to give money toward food aid, but even the richest man in Europe donated to repair the church.

Repairing churches is not a life and death matter. Churches, while culturally and religiously significant, are not necessary for life in the way that nutritious food is. Being an absurdly wealthy person who only donates money for things you find aesthetically pleasing is morally bankrupt in a world where money could literally fund the end of world hunger.

This isn’t to say that rebuilding the Notre Dame is bad — preserving culturally significant places is important. But the Roman Catholic Church is the richest religious organization in the world — it can probably manage repairing a church without the help of wealthy donors.

At a time when there are heated protests in the streets of France over taxes that unfairly affect the poor, pledging money toward buildings seems fraught. Spending billions on unnecessary buildings is a slap in the face to French people fighting for equitable wealth and tax distribution.

The info is here.

The ethics of emerging technologies

Jessica Hallman
www.techxplore.com
Originally posted April 10, 2019


Here is an excerpt:

"There's a new field called data ethics," said Fonseca, associate professor in Penn State's College of Information Sciences and Technology and 2019-2020 Faculty Fellow in Penn State's Rock Ethics Institute. "We are collecting data and using it in many different ways. We need to start thinking more about how we're using it and what we're doing with it."

By approaching emerging technology from a philosophical perspective, Fonseca can explore the ethical dilemmas surrounding how we gather, manage and use information. He explained that with the rise of big data, for example, many scientists and analysts are forgoing hypothesis formulation in favor of letting the data suggest inferences about particular problems.

"Normally, in science, theory drives observations. Our theoretical understanding guides both what we choose to observe and how we choose to observe it," Fonseca explained. "Now, with so much data available, science's classical picture of theory-building is under threat of being inverted, with data being suggested as the source of theories in what is being called data-driven science."

Fonseca shared these thoughts in his paper, "Cyber-Human Systems of Thought and Understanding," which was published in the April 2019 issue of the Journal of the Association of Information Sciences and Technology. Fonseca co-authored the paper with Michael Marcinowski, College of Liberal Arts, Bath Spa University, United Kingdom; and Clodoveu Davis, Computer Science Department, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil.

In the paper, the researchers propose a concept to bridge the divide between theoretical thinking and a-theoretical, data-driven science.

The info is here.

Tuesday, May 7, 2019

Ethics Alone Can’t Fix Big Tech

Daniel Susser
Slate.com
Originally posted April 17, 2019

Here is an excerpt:

At a deeper level, these issues highlight problems with the way we’ve been thinking about how to create technology for good. Desperate for anything to rein in otherwise indiscriminate technological development, we have ignored the different roles our theoretical and practical tools are designed to play. With no coherent strategy for coordinating them, none succeed.

Consider ethics. In discussions about emerging technologies, there is a tendency to treat ethics as though it offers the tools to answer all values questions. I suspect this is largely ethicists’ own fault: Historically, philosophy (the larger discipline of which ethics is a part) has mostly neglected technology as an object of investigation, leaving that work for others to do. (Which is not to say there aren’t brilliant philosophers working on these issues; there are. But they are a minority.) The result, as researchers from Delft University of Technology and Leiden University in the Netherlands have shown, is that the vast majority of scholarly work addressing issues related to technology ethics is being conducted by academics trained and working in other fields.

This makes it easy to forget that ethics is a specific area of inquiry with a specific purview. And like every other discipline, it offers tools designed to address specific problems. To create a world in which A.I. helps people flourish (rather than just generate profit), we need to understand what flourishing requires, how A.I. can help and hinder it, and what responsibilities individuals and institutions have for creating technologies that improve our lives. These are the kinds of questions ethics is designed to address, and critically important work in A.I. ethics has begun to shed light on them.

At the same time, we also need to understand why attempts at building “good technologies” have failed in the past, what incentives drive individuals and organizations not to build them even when they know they should, and what kinds of collective action can change those dynamics. To answer these questions, we need more than ethics. We need history, sociology, psychology, political science, economics, law, and the lessons of political activism. In other words, to tackle the vast and complex problems emerging technologies are creating, we need to integrate research and teaching around technology with all of the humanities and social sciences.

The info is here.

Are Placebo-Controlled, Relapse Prevention Trials in Schizophrenia Research Still Necessary or Ethical?

Ryan E. Lawrence, Paul S. Appelbaum, Jeffrey A. Lieberman
JAMA Psychiatry. Published online April 10, 2019.
doi:10.1001/jamapsychiatry.2019.0275

Randomized, placebo-controlled trials have been the gold standard for evaluating the safety and efficacy of new psychotropic drugs for more than half a century. Although the US Food and Drug Administration (FDA) does not require placebo-controlled trial data to approve new drugs or marketing indications, they have become the industry standard for psychotropic drug development.

Placebos are controversial. The FDA guidelines state “when a new treatment is tested for a condition for which no effective treatment is known, there is usually no ethical problem with a study comparing the new treatment to placebo.”1 However, “in cases where an available treatment is known to prevent serious harm, such as death or irreversible morbidity, it is generally inappropriate to use a placebo control”. When new antipsychotics are developed for schizophrenia, it can be debated which guideline applies.

From the Conclusion:

We believe the time has come to cease the use of placebo in relapse prevention studies and encourage the use of active comparators that would protect patients from relapse and provide information on the comparative effectiveness of the drugs studied. We recommend that pharmaceutical companies not seek maintenance labeling if it would require placebo-controlled, relapse prevention trials. However, for putative antipsychotics with a novel mechanism of action, placebo-controlled, relapse prevention trials may still be justifiable.

The info is here.

Monday, May 6, 2019

How do we make moral decisions?

Dartmouth College
Press Release
Originally released April 18, 2019

When it comes to making moral decisions, we often think of the golden rule: do unto others as you would have them do unto you. Yet, why we make such decisions has been widely debated. Are we motivated by feelings of guilt, where we don't want to feel bad for letting the other person down? Or by fairness, where we want to avoid unequal outcomes? Some people may rely on principles of both guilt and fairness and may switch their moral rule depending on the circumstances, according to a Radboud University - Dartmouth College study on moral decision-making and cooperation. The findings challenge prior research in economics, psychology and neuroscience, which is often based on the premise that people are motivated by one moral principle, which remains constant over time. The study was published recently in Nature Communications.

"Our study demonstrates that with moral behavior, people may not in fact always stick to the golden rule. While most people tend to exhibit some concern for others, others may demonstrate what we have called 'moral opportunism,' where they still want to look moral but want to maximize their own benefit," said lead author Jeroen van Baar, a postdoctoral research associate in the department of cognitive, linguistic and psychological sciences at Brown University, who started this research when he was a scholar at Dartmouth visiting from the Donders Institute for Brain, Cognition and Behavior at Radboud University.

"In everyday life, we may not notice that our morals are context-dependent since our contexts tend to stay the same daily. However, under new circumstances, we may find that the moral rules we thought we'd always follow are actually quite malleable," explained co-author Luke J. Chang, an assistant professor of psychological and brain sciences and director of the Computational Social Affective Neuroscience Laboratory (Cosan Lab) at Dartmouth. "This has tremendous ramifications if one considers how our moral behavior could change under new contexts, such as during war," he added.

The info is here.

The research is here.

Ethical Considerations Regarding Internet Searches for Patient Information.

Charles C. Dike, Philip Candilis, Barbara Kocsis, and others
Psychiatric Services
Published Online:17 Jan 2019

Abstract

In 2010, the American Medical Association developed policies regarding professionalism in the use of social media, but it did not present specific ethical guidelines on targeted Internet searches for information about a patient or the patient’s family members. The American Psychiatric Association (APA) provided some guidance in 2016 through the Opinions of the Ethics Committee, but published opinions are limited. On behalf of the APA Ethics Committee, the authors developed a resource document describing ethical considerations regarding Internet and social media searches for patient information, from which this article has been adapted. Recommendations include the following. Except in emergencies, it is advisable to obtain a patient’s informed consent before performing such a search. The psychiatrist should be aware of his or her motivations for performing a search and should avoid doing so unless it serves the patient’s best interests. Information obtained through such searches should be handled with sensitivity regarding the patient’s privacy. The psychiatrist should consider how the search might influence the clinician-patient relationship. When interpreted with caution, Internet- and social media–based information may be appropriate to consider in forensic evaluations.

The info is here.

Sunday, May 5, 2019

When a Colleague Dies, CEOs Change How They Lead

Guoli Chen
www.barrons.com
Originally posted April 8, 2019

Here is an excerpt:

A version of my research, “That Could Have Been Me: Director Deaths, CEO Mortality Salience and Corporate Prosocial Behavior” (co-authored with Craig Crossland and Sterling Huang and forthcoming in Management Science), notes the significant impact a director’s death can have on resource allocation within a firm and on a CEO’s activities, both outside and inside the organization.

For example, we saw that CEOs who’d experienced the death of a director on their boards reduced the number of outside directorships they held in publicly listed firms. At the same time, they increased their number of directorships in non-profit organizations. It seems that thoughts of mortality had inspired a desire to make a lasting, positive contribution to society, or to jettison some priorities in favor of more pro-social ones.

We also saw differences in how CEOs led their firms. Our study compared public firms where a director had died between 1990 and 2013 with similar firms where no director had died. CEOs who’d experienced the death of a close colleague spent less effort on their firms’ immediate growth and financial-return activities: cost of goods sold increased, and the companies they led became less aggressive in expanding their assets and firm size after the director’s death. This may reflect the “quiet life” or “withdrawal behavior” hypotheses, which suggest that CEOs become less engaged with corporate activities once they confront the finiteness of their life span. They may shift their time and focus from corporate to family or community activities.

Meanwhile, we also observed that firms led by these CEOs increased their corporate social responsibility (CSR) activities after the director’s death. CEOs with a heightened awareness of death steer their firms’ resource allocation towards activities that provide benefits to broader stakeholders, such as employee health plans, more environmentally friendly manufacturing processes, and charitable contributions.

The info is here.

Saturday, May 4, 2019

Moral Grandstanding in Public Discourse

Joshua Grubbs, Brandon Warmke, Justin Tosi, & Alicia James
PsyArXiv Preprints
Originally posted April 5, 2019

Abstract

Public discourse is often caustic and conflict-filled. This trend seems to be particularly evident when the content of such discourse is around moral issues (broadly defined) and when the discourse occurs on social media. Several explanatory mechanisms for such conflict have been explored in recent psychological and social-science literatures. The present work sought to examine a potentially novel explanatory mechanism defined in philosophical literature: Moral Grandstanding. According to philosophical accounts, Moral Grandstanding is the use of moral talk to seek social status. For the present work, we conducted five studies, using two undergraduate samples (Study 1, N = 361; Study 2, N = 356); a sample matched to U.S. norms for age, gender, race, income, and Census region (Study 3, N = 1,063); a YouGov sample matched to U.S. demographic norms (Study 4, N = 2,000); and a brief, one-month longitudinal study of Mechanical Turk workers in the U.S. (Study 5, baseline N = 499, follow-up n = 296). Across studies, we found initial support for the validity of Moral Grandstanding as a construct. Specifically, moral grandstanding was associated with status-seeking personality traits, as well as greater political and moral conflict in daily life.

Here is part of the Conclusion:

Public discourse regarding morally charged topics is prone to conflict and polarization, particularly on social media platforms that tend to facilitate ideological echo chambers. The present study introduces an interdisciplinary construct called Moral Grandstanding as a possible contributing factor to this phenomenon. MG links evolutionary psychology and moral philosophy to describe the use of public moral speech to enhance one’s status or image in the eyes of others. Specifically, MG is framed as an expression of status-seeking drives in the domain of public discourse. Self-reported motivations underlying grandstanding behaviors appear consistent with status seeking more broadly, representing both prestige striving and dominance striving, both of which were found to be associated with greater interpersonal conflict and polarization.

The research is here.

Friday, May 3, 2019

Real or artificial? Tech titans declare AI ethics concerns

Matt O'Brien and Rachel Lerman
Associated Press
Originally posted April 7, 2019

Here is an excerpt:

"Ethical AI" has become a new corporate buzz phrase, slapped on internal review committees, fancy job titles, research projects and philanthropic initiatives. The moves are meant to address concerns over racial and gender bias emerging in facial recognition and other AI systems, as well as address anxieties about job losses to the technology and its use by law enforcement and the military.

But how much substance lies behind the increasingly public ethics campaigns? And who gets to decide which technological pursuits do no harm?

Google was hit with both questions when it formed a new board of outside advisers in late March to help guide how it uses AI in products. But instead of winning over potential critics, it sparked internal rancor. A little more than a week later, Google bowed to pressure from the backlash and dissolved the council.

The outside board fell apart in stages. One of the board's eight inaugural members quit within days and another quickly became the target of protests from Google employees who said her conservative views don't align with the company's professed values.

As thousands of employees called for the removal of Heritage Foundation President Kay Coles James, Google disbanded the board last week.

"It's become clear that in the current environment, (the council) can't function as we wanted," the company said in a statement.

The info is here.

Fla. healthcare executive found guilty in $1B Medicare fraud case

Associated Press 
Modern Healthcare
Originally published April 5, 2019

Florida healthcare executive Philip Esformes was found guilty Friday of paying and receiving kickbacks and other charges as part of the biggest Medicare fraud case in U.S. history.

During the seven-week trial in federal court in Miami, prosecutors called Esformes a trickster and mastermind of a scheme paying bribes and kickbacks to doctors to refer patients to his nursing home network from 2009 to 2016. The fraud also included paying off a regulator to learn when inspectors would make surprise visits to his facilities, or if patients had made complaints.

Esformes owns dozens of Miami-Dade nursing facilities as well as homes in Miami, Los Angeles and Chicago.

The info is here.

Thursday, May 2, 2019

A Facebook request: Write a code of tech ethics

Mike Godwin
www.latimes.com
Originally published April 30, 2019

Facebook is preparing to pay a multi-billion-dollar fine and dealing with ongoing ire from all corners for its user privacy lapses, the viral transmission of lies during elections, and delivery of ads in ways that skew along gender and racial lines. To grapple with these problems (and to get ahead of the bad PR they created), Chief Executive Mark Zuckerberg has proposed that governments get together and set some laws and regulations for Facebook to follow.

But Zuckerberg should be aiming higher. The question isn’t just what rules should a reformed Facebook follow. The bigger question is what all the big tech companies’ relationships with users should look like. The framework needed can’t be created out of whole cloth just by new government regulation; it has to be grounded in professional ethics.

Doctors and lawyers, as they became increasingly professionalized in the 19th century, developed formal ethical codes that became the seeds of modern-day professional practice. Tech-company professionals should follow their example. An industry-wide code of ethics could guide companies through the big questions of privacy and harmful content.

The info is here.

Editor's note: Many social media companies engage in unethical behavior on a regular basis, typically revolving around lack of consent, lack of privacy standards, filter bubble (personalized algorithms) issues, lack of accountability, lack of transparency, harmful content, and third party use of data.

Part-revived pig brains raise slew of ethical quandaries

Nita A. Farahany, Henry T. Greely & Charles M. Giattino
Nature
Originally published April 17, 2019

Scientists have restored and preserved some cellular activities and structures in the brains of pigs that had been decapitated for food production four hours before. The researchers saw circulation in major arteries and small blood vessels, metabolism and responsiveness to drugs at the cellular level and even spontaneous synaptic activity in neurons, among other things. The team formulated a unique solution and circulated it through the isolated brains using a network of pumps and filters called BrainEx. The solution was cell-free, did not coagulate and contained a haemoglobin-based oxygen carrier and a wide range of pharmacological agents.

The remarkable study, published in this week’s Nature, offers the promise of an animal or even human whole-brain model in which many cellular functions are intact. At present, cells from animal and human brains can be sustained in culture for weeks, but only so much can be gleaned from isolated cells. Tissue slices can provide snapshots of local structural organization, yet they are woefully inadequate for questions about function and global connectivity, because much of the 3D structure is lost during tissue preparation.

The work also raises a host of ethical issues. There was no evidence of any global electrical activity — the kind of higher-order brain functioning associated with consciousness. Nor was there any sign of the capacity to perceive the environment and experience sensations. Even so, because of the possibilities it opens up, the BrainEx study highlights potential limitations in the current regulations for animals used in research.

The info is here.

Wednesday, May 1, 2019

Chinese scientists create super monkeys by injecting brains with human DNA

Harriet Brewis
www.msn.com
Originally published April 13, 2019

Chinese scientists have created super-intelligent monkeys by injecting them with human DNA.

Researchers transferred a gene linked to brain development, called MCPH1, into rhesus monkey embryos.

Once they were born, the monkeys were found to have better memories, reaction times and processing abilities than their untouched peers.

"This was the first attempt to understand the evolution of human cognition using a transgenic monkey model," said Bing Su, a geneticist at Kunming Institute of Zoology in China.

The research was conducted by Dr Su’s team at the Kunming Institute of Zoology, in collaboration with the Chinese Academy of Sciences and University of North Carolina in the US.

“Our findings demonstrated that nonhuman primates (excluding ape species) have the potential to provide important – and potentially unique – insights into basic questions of what actually makes human[s] unique,” the authors wrote in the study.

The info is here.