Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, December 24, 2020

Google Employees Call Black Scientist's Ouster 'Unprecedented Research Censorship'

Bobby Allyn
www.npr.org
Originally published 3 Dec 20

Hundreds of Google employees have published an open letter following the firing of an accomplished scientist known for her research into the ethics of artificial intelligence and her work showing racial bias in facial recognition technology.

That scientist, Timnit Gebru, helped lead Google's Ethical Artificial Intelligence Team until Tuesday.

Gebru, who is Black, says she was forced out of the company after a dispute over a research paper and an email she subsequently sent to peers expressing frustration over how the tech giant treats employees of color and women.

"Instead of being embraced by Google as an exceptionally talented and prolific contributor, Dr. Gebru has faced defensiveness, racism, gaslighting, research censorship, and now a retaliatory firing," the open letter said. By Thursday evening, more than 400 Google employees and hundreds of outsiders — many of them academics — had signed it.

The research paper in question was co-authored by Gebru along with four others at Google and two other researchers. It examined the environmental and ethical implications of an AI tool used by Google and other technology companies, according to NPR's review of the draft paper.

The 12-page draft explored the possible pitfalls of relying on the tool, which scans massive amounts of information on the Internet and produces text as if written by a human. The paper argued it could end up mimicking hate speech and other types of derogatory and biased language found online. The paper also cautioned against the energy cost of using such large-scale AI models.

According to Gebru, she was planning to present the paper at a research conference next year, but then her bosses at Google stepped in and demanded she retract the paper or remove all the Google employees as authors.

Wednesday, December 23, 2020

Beyond burnout: For health care workers, this surge of Covid-19 is bringing burnover

Wendy Dean & Simon G. Talbot
statnews.com
Originally posted 25 Nov 20

Covid-19 is roaring back for a third wave. The first two substantially increased feelings of moral injury and burnout among health care workers. This one is bringing burnover.

Health care systems are scrambling anew. The crises of ICU beds at capacity, shortages of personal protective equipment, emergency rooms turning away ambulances, and staff shortages are happening this time not in isolated hot spots but in almost every state. Clinicians again face work that is risky, heart-rending, physically exhausting, and demoralizing, all the elements of burnout. They have seen this before and are intensely frustrated it is happening again.

Too many of them are leaving health care long before retirement. The disconnect between what health care workers know and how the public is behaving, driven by relentless disinformation, is unbearable. Paraphrasing a colleague, “How can they call us essential and then treat us like we are disposable?”

It is time for leaders of hospitals and health care systems to add another, deeper layer of support for their staff by speaking out publicly and collectively in defense of science, safety, and public health, even if it risks estranging patients and politicians.

Long before the pandemic emerged, the relationships between health care organizations and their staffs were already strained by years of cost-cutting that trimmed staffing levels, supplies, and space to the bone. Driven by changes in health care reimbursement structures, systems were “optimized” to the point that they were continually running at what felt like full capacity, with precious little slack to accommodate minor surges, much less one the magnitude of a global pandemic.

Tuesday, December 22, 2020

Examining the asymmetry in judgments of racism in self and others

Angela C. Bell, Melissa Burkley, & Jarrod Bock (2019)
The Journal of Social Psychology, 159:5, 611-627.
DOI: 10.1080/00224545.2018.1538930

Abstract

Across three experiments, participants were provided with a list of racist behaviors that purportedly had been enacted by a fellow student but in fact were based on the participants’ own behaviors. People consistently evaluated themselves as less racist than this comparison other, even though this other’s racist behaviors were identical to their own. Studies 2a and 2b demonstrate this effect is quite robust and even occurs under social pressure and social consensus conditions in which participants were free to express their racial biases. Thus, it appears that people are less likely to base their racist trait ratings on behavioral evidence when evaluating themselves compared to when they are evaluating another. Taken together, this work provides evidence for the consistency and robustness of self-enhanced social comparisons as applied to the trait domain of racism. Further, this work sheds light on why people deny they are racist even when they act racist.

General discussion

The present work provides evidence for the consistency and robustness of the biased self-enhanced evaluations of racism. Across three experiments, participants received a list of racist behaviors that purportedly had been enacted by a fellow student but in fact were based on the participants’ own behaviors. People consistently evaluated themselves as less racist than this comparison other, even though this other’s racist behaviors were identical to their own. Studies 2a and 2b demonstrate this effect is quite robust and even occurs under conditions in which participants feel free to express their racial biases. Taken together, this work suggests that people are less likely to base their racist trait ratings on behavioral evidence when evaluating themselves compared to when they are evaluating another. By doing so, people are able to maintain the self-perception that they are not racist even in the face of contradictory behavioral evidence (i.e., people rate themselves as less racist than others).

(I emphasized this last sentence.)

Monday, December 21, 2020

Physicians' Ethics Change With Societal Trends

Batya S. Yasgur
MedScape.com
Originally posted 23 Nov 20

Here is an excerpt:

Are Romantic Relationships With Patients Always Off Limits?

Medscape asked physicians whether it was acceptable to become romantically or sexually involved with a patient. Respondents in 2020 were far more comfortable with a relationship with a former patient after 6 months had elapsed than respondents were in 2010. In 2020, 2% said they were comfortable having a romance with a current patient; 26% were comfortable being romantic with a person who had stopped being a patient 6 months earlier; and 62% said a flat-out "no" to the concept. In 2010, 83% said "no" to the idea of dating a patient; fewer than 1% agreed that dating a current patient was acceptable, and 12% said it was okay after 6 months.

Some respondents felt strongly that romantic or sexual involvement is always off limits, even months or years after the physician is no longer treating the patient. "Once a patient, always a patient," wrote a psychiatrist.

On the other hand, many respondents thought being a "patient" was not a lifelong status. An orthopedic surgeon wrote, "After 6 months, they are no longer your patient." Several respondents said involvement was okay if the physician stopped treating the patient and referred the patient to another provider. Others recommended a longer wait time.

"Although most doctors have traditionally kept their personal and professional lives separate, they are no longer as bothered by bending of boundaries and have found a zone of acceptability in the 6-month waiting period," Goodman said.

Packer added that the "greater relaxation of sexual standards and boundaries in general" might have had a bearing on survey responses because "doctors are part of those changing societal norms."

Evans suggested that the rise of individualism and autonomy partially accounts for the changing attitudes toward physician-patient (or former patient) relationships. "Being prohibited from having a relationship with a patient or former patient is increasingly being seen as an infringement on civil liberties and autonomy, which is a major theme these days."

Sunday, December 20, 2020

Choice blindness: Do you know yourself as well as you think?

David Edmonds
BBC.com
Originally published 3 Oct 20

Here is an excerpt:

Clearly we lack self-knowledge about our motives and choices. But so what? What are the implications of this research?

Well, perhaps one general point is that we should learn to be more tolerant of people who change their minds. We tend to have very sensitive antennae for inconsistency - be this inconsistency in a partner, who's changed their mind on whether they fancy an Italian or an Indian meal, or a politician who's backed one policy in the past and now supports an opposing position. But as we often don't have a clear insight into why we choose what we choose, we should surely be given some latitude to switch our choices.

There may also be more specific implications for how we navigate through our current era - a period in which there is growing cultural and political polarisation. It would be natural to believe that those who support a left-wing or right-wing party do so because they're committed to that party's ideology: they believe in free markets or, the opposite, in a larger role for the state. But Petter Johansson's work suggests that our deeper commitment is not to particular policies, since, using his switching technique, we can be persuaded to endorse all sorts of policies. Rather, "we support a label or a team".

That is to say, we're liable to overestimate the extent to which a Trump supporter - or a Biden supporter - backs his or her candidate because of the policies the politician promotes. Instead, someone will be Team Trump, or Team Biden. A striking example of this was in the last US election. Republicans have traditionally been pro-free trade - but when Trump began to advocate protectionist policies, most Republicans carried on backing him, without even seeming to notice the shift.


Saturday, December 19, 2020

Robots at work: People prefer—and forgive—service robots with perceived feelings

Yam, K. C., Bingman, Y. E., et al.
Journal of Applied Psychology.
Advance online publication.

Abstract

Organizations are increasingly relying on service robots to improve efficiency, but these robots often make mistakes, which can aggravate customers and negatively affect organizations. How can organizations mitigate the frontline impact of these robotic blunders? Drawing from theories of anthropomorphism and mind perception, we propose that people evaluate service robots more positively when they are anthropomorphized and seem more humanlike—capable of both agency (the ability to think) and experience (the ability to feel). We further propose that in the face of robot service failures, increased perceptions of experience should attenuate the negative effects of service failures, whereas increased perceptions of agency should amplify the negative effects of service failures on customer satisfaction. In a field study conducted in the world’s first robot-staffed hotel (Study 1), we find that anthropomorphism generally leads to higher customer satisfaction and that perceived experience, but not agency, mediates this effect. Perceived experience (but not agency) also interacts with robot service failures to predict customer satisfaction such that high levels of perceived experience attenuate the negative impacts of service failures on customer satisfaction. We replicate these results in a lab experiment with a service robot (Study 2). Theoretical and practical implications are discussed.

From Practical Contributions

Second, our findings also suggest that organizations should focus on encouraging perceptions of service robots’ experience rather than agency. For example, when assigning names to robots or programming robots’ voices, a female name and voice could potentially lead to enhanced perceptions of experience more so than a male name and voice (Gray et al., 2007). Likewise, service robots’ programmed scripts should include content that conveys the capacity of experience, such as displaying emotions. Although the emerging service robotic technologies are not perfect and failures are inevitable, encouraging anthropomorphism and, more specifically, perceptions of experience can likely offset the negative effects of robot service failures.

Friday, December 18, 2020

Are Free Will Believers Nicer People? (Four Studies Suggest Not)

Crone, D. L., & Levy, N. L. (2019)
Social Psychological and Personality Science, 10(5), 612-619.
DOI: 10.1177/1948550618780732

Abstract

Free will is widely considered a foundational component of Western moral and legal codes, and yet current conceptions of free will are widely thought to fit uncomfortably with much research in psychology and neuroscience. Recent research investigating the consequences of laypeople’s free will beliefs (FWBs) for everyday moral behavior suggests that stronger FWBs are associated with various desirable moral characteristics (e.g., greater helpfulness, less dishonesty). These findings have sparked concern regarding the potential for moral degeneration throughout society as science promotes a view of human behavior that is widely perceived to undermine the notion of free will. We report four studies (combined N = 921) originally concerned with possible mediators and/or moderators of the abovementioned associations. Unexpectedly, we found no association between FWBs and moral behavior. Our findings suggest that the FWB–moral behavior association (and accompanying concerns regarding decreases in FWBs causing moral degeneration) may be overstated.

(Bold added by me.)

Thursday, December 17, 2020

AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust

Capgemini Research Institute

In the wake of the COVID-19 crisis, our reliance on AI has skyrocketed. Today more than ever before, we look to AI to help us limit physical interactions, predict the next wave of the pandemic, disinfect our healthcare facilities and even deliver our food. But can we trust it?

In the latest report from the Capgemini Research Institute – AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust – we surveyed over 800 organizations and 2,900 consumers to get a picture of the state of ethics in AI today. We wanted to understand what organizations can do to move to AI systems that are ethical by design, how they can benefit from doing so, and the consequences if they don’t. We found that while customers are becoming more trusting of AI-enabled interactions, organizations’ progress in ethical dimensions is underwhelming. And this is dangerous because once violated, trust can be difficult to rebuild.

Ethically sound AI requires a strong foundation of leadership, governance, and internal practices around audits, training, and operationalization of ethics. Building on this foundation, organizations have to:
  1. Clearly outline the intended purpose of AI systems and assess their overall potential impact
  2. Proactively deploy AI to achieve sustainability goals
  3. Embed diversity and inclusion principles proactively throughout the lifecycle of AI systems for advancing fairness
  4. Enhance transparency with the help of technology tools, humanize the AI experience and ensure human oversight of AI systems
  5. Ensure technological robustness of AI systems
  6. Empower customers with privacy controls to put them in charge of AI interactions.

Wednesday, December 16, 2020

If a robot is conscious, is it OK to turn it off? The moral implications of building true AIs

Anand Vaidya
The Conversation
Originally posted 27 Oct 20

Here is an excerpt:

There are two parts to consciousness. First, there’s the what-it’s-like-for-me aspect of an experience, the sensory part of consciousness. Philosophers call this phenomenal consciousness. It’s about how you experience a phenomenon, like smelling a rose or feeling pain.

In contrast, there’s also access consciousness. That’s the ability to report, reason, behave and act in a coordinated and responsive manner to stimuli based on goals. For example, when I pass the soccer ball to my friend making a play on the goal, I am responding to visual stimuli, acting from prior training, and pursuing a goal determined by the rules of the game. I make the pass automatically, without conscious deliberation, in the flow of the game.

Blindsight nicely illustrates the difference between the two types of consciousness. Someone with this neurological condition might report, for example, that they cannot see anything in the left side of their visual field. But if asked to pick up a pen from an array of objects in the left side of their visual field, they can reliably do so. They cannot see the pen, yet they can pick it up when prompted – an example of access consciousness without phenomenal consciousness.

Data is an android. How do these distinctions play out with respect to him?

The Data dilemma

The android Data demonstrates that he is self-aware in that he can monitor whether or not, for example, he is optimally charged or there is internal damage to his robotic arm.

Data is also intelligent in the general sense. He does a lot of distinct things at a high level of mastery. He can fly the Enterprise, take orders from Captain Picard and reason with him about the best path to take.

He can also play poker with his shipmates, cook, discuss topical issues with close friends, fight with enemies on alien planets and engage in various forms of physical labor. Data has access consciousness. He would clearly pass the Turing test.

However, Data most likely lacks phenomenal consciousness - he does not, for example, delight in the scent of roses or experience pain. He embodies a supersized version of blindsight. He’s self-aware and has access consciousness – can grab the pen – but across all his senses he lacks phenomenal consciousness.