Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Inclusion.

Thursday, December 28, 2023

The Relative Importance of Target and Judge Characteristics in Shaping the Moral Circle

Jaeger, B., & Wilks, M. (2021). 
Cognitive Science. 

Abstract

People's treatment of others (humans, nonhuman animals, or other entities) often depends on whether they think the entity is worthy of moral concern. Recent work has begun to investigate which entities are included in a person's moral circle, examining how certain target characteristics (e.g., species category, perceived intelligence) and judge characteristics (e.g., empathy, political orientation) shape moral inclusion. However, the relative importance of target and judge characteristics in predicting moral inclusion remains unclear. When predicting whether a person will deem an entity worthy of moral consideration, how important is it to know who is making the judgment (i.e., characteristics of the judge), who is being judged (i.e., characteristics of the target), and potential interactions between the two factors? Here, we address this foundational question by conducting a variance component analysis of the moral circle. In two studies with participants from the Netherlands, the United States, the United Kingdom, and Australia (N = 836), we test how much variance in judgments of moral concern is explained by between-target differences, between-judge differences, and by the interaction between the two factors. We consistently find that all three components explain substantial amounts of variance in judgments of moral concern. Our findings provide two important insights. First, an increased focus on interactions between target and judge characteristics is needed, as these interactions explain as much variance as target and judge characteristics separately. Second, any theoretical account that aims to provide an accurate description of moral inclusion needs to consider target characteristics, judge characteristics, and their interaction.

Here is my take:

The authors begin by reviewing the literature on the moral circle, which is the group of beings that people believe are worthy of moral consideration. They note that both target characteristics (e.g., species category, perceived intelligence) and judge characteristics (e.g., empathy, political orientation) have been shown to influence moral inclusion. However, the relative importance of these two types of characteristics remains unclear.

To address this question, the authors conducted two studies with participants from the Netherlands, the United States, the United Kingdom, and Australia. In each study, participants were asked to rate how much moral concern they felt for a variety of targets, including humans, animals, and robots. Participants were also asked to complete a questionnaire about their own moral values and beliefs.

The authors' analysis revealed that target characteristics, judge characteristics, and their interaction each explained substantial amounts of variance in judgments of moral concern, with the interaction mattering as much as either factor alone. The moral circle, in other words, is not a function of target or judge characteristics in isolation; it emerges from the interplay between who is judging and who is being judged.
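To make the decomposition concrete, here is a minimal sketch of a variance component analysis on simulated moral-concern ratings (my illustration, not the authors' code; note that with a single rating per judge-target cell, the interaction component is confounded with measurement noise):

```python
import numpy as np

rng = np.random.default_rng(0)
n_judges, n_targets = 400, 26   # sizes made up for illustration

# Simulate ratings = judge effect + target effect + judge-x-target interaction
judge_fx  = rng.normal(0, 1.0, n_judges)                 # between-judge differences
target_fx = rng.normal(0, 1.2, n_targets)                # between-target differences
interact  = rng.normal(0, 1.0, (n_judges, n_targets))    # interaction (+ noise)
ratings   = judge_fx[:, None] + target_fx[None, :] + interact

grand        = ratings.mean()
judge_means  = ratings.mean(axis=1)
target_means = ratings.mean(axis=0)

# Method-of-moments decomposition of the total variance
var_judge  = judge_means.var()
var_target = target_means.var()
var_inter  = (ratings - judge_means[:, None] - target_means[None, :] + grand).var()

total = var_judge + var_target + var_inter
for name, v in (("judge", var_judge), ("target", var_target), ("interaction", var_inter)):
    print(f"{name:<12}{v / total:6.1%} of variance")
```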

The authors' findings have important implications for our understanding of the moral circle. They show that moral inclusion is not simply a matter of whether or not a target possesses certain characteristics (e.g., sentience, intelligence). Rather, it also depends on the characteristics of the judge, as well as the interaction between the two.

The authors' findings also have important implications for applied ethics. For example, they suggest that ethicists should be careful to avoid making generalizations about the moral status of entire groups of beings. Instead, they should consider the individual characteristics of both the target and the judge when making moral judgments.

Tuesday, December 19, 2023

Human bias in algorithm design

Morewedge, C.K., Mullainathan, S., Naushan, H.F. et al.
Nat Hum Behav 7, 1822–1824 (2023).

Here is how the article starts:

Algorithms are designed to learn user preferences by observing user behaviour. This causes algorithms to fail to reflect user preferences when psychological biases affect user decision making. For algorithms to enhance social welfare, algorithm design needs to be psychologically informed. Many people believe that algorithms are failing to live up to their promise to reflect user preferences and improve social welfare. The problem is not technological. Modern algorithms are sophisticated and accurate. Training algorithms on unrepresentative samples contributes to the problem, but failures happen even when algorithms are trained on the population. Nor is the problem caused only by the profit motive. For-profit firms design algorithms at a cost to users, but even non-profit organizations and governments fall short.

All algorithms are built on a psychological model of what the user is doing. The fundamental constraint on this model is the narrowness of the measurable variables for algorithms to predict. We suggest that algorithms fail to reflect user preferences and enhance their welfare because algorithms rely on revealed preferences to make predictions. Designers build algorithms with the erroneous assumption that user behaviour (revealed preferences) tells us (1) what users rationally prefer (normative preferences) and (2) what will enhance user welfare. Reliance on this 95-year-old economic model, rather than the more realistic assumption that users exhibit bounded rationality, leads designers to train algorithms on user behaviour. Revealed preferences can identify unknown preferences, but revealed preferences are an incomplete — and at times misleading — measure of the normative preferences and values of users. It is ironic that modern algorithms are built on an outmoded and indefensible commitment to revealed preferences.
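To make the gap between revealed and normative preferences concrete, here is a toy simulation (my sketch, not from the article): an algorithm that ranks items by observed clicks ends up favoring items whose appeal comes from a psychological pull rather than from what users actually value.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items = 5_000

true_utility   = rng.normal(0, 1, n_items)   # normative preference: long-run value
clickbait_pull = rng.normal(0, 1, n_items)   # psychological bias driving impulsive clicks

# Revealed preference: click probability mixes real value with the bias
click_prob = 1 / (1 + np.exp(-(true_utility + 2.0 * clickbait_pull)))

top_by_clicks  = np.argsort(click_prob)[-100:]     # what a click-trained ranker surfaces
top_by_utility = np.argsort(true_utility)[-100:]   # what would actually serve users

print("mean true utility, top 100 by clicks: ", round(true_utility[top_by_clicks].mean(), 2))
print("mean true utility, top 100 by utility:", round(true_utility[top_by_utility].mean(), 2))
```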


Here is my summary.

Human biases can be baked into algorithms, leading to unintended outcomes. The authors argue that algorithms fail to serve users not because the technology is weak, but because designers train them on revealed preferences: observed behavior that is shaped by users' psychological biases and so diverges from what users actually value. They highlight the importance of considering psychological factors when designing algorithms, and they call for psychologically informed design, grounded in bounded rationality rather than the revealed-preference model, to better capture user preferences and enhance social welfare. They emphasize the need for a more holistic approach to algorithm design that goes beyond technical considerations and takes the human element into account.

Sunday, October 22, 2023

What Is Psychological Safety?

Amy Gallo
Harvard Business Review
Originally posted 15 FEB 23

Here are two excerpts:

Why is psychological safety important?

First, psychological safety leads to team members feeling more engaged and motivated, because they feel that their contributions matter and that they’re able to speak up without fear of retribution. Second, it can lead to better decision-making, as people feel more comfortable voicing their opinions and concerns, which often leads to a more diverse range of perspectives being heard and considered. Third, it can foster a culture of continuous learning and improvement, as team members feel comfortable sharing their mistakes and learning from them. (This is what my boss was doing in the opening story.)

All of these benefits — the impact on a team’s performance, innovation, creativity, resilience, and learning — have been proven in research over the years, most notably in Edmondson’s original research and in a study done at Google. That research, known as Project Aristotle, aimed to understand the factors that impacted team effectiveness across Google. Using over 30 statistical models and hundreds of variables, that project concluded that who was on a team mattered less than how the team worked together. And the most important factor was psychological safety.

Further research has shown the incredible downsides of not having psychological safety, including negative impacts on employee well-being, including stress, burnout, and turnover, as well as on the overall performance of the organization.

(cut)

How do you create psychological safety?

Edmondson is quick to point out that “it’s more magic than science” and it’s important for managers to remember this is “a climate that we co-create, sometimes in mysterious ways.”

Anyone who has worked on a team marked by silence and the inability to speak up knows how hard it is to reverse that.

A lot of what goes into creating a psychologically safe environment are good management practices — things like establishing clear norms and expectations so there is a sense of predictability and fairness; encouraging open communication and actively listening to employees; making sure team members feel supported; and showing appreciation and humility when people do speak up.

There are a few additional tactics that Edmondson points to as well.


Here are some of my thoughts about psychological safety:
  • It is not the same as comfort. It is okay to feel uncomfortable sometimes, as long as you feel safe to take risks and speak up.
  • It is not about being friends with everyone on your team. It is about creating a respectful and inclusive environment where everyone feels like they can belong.
  • It takes time and effort to build psychological safety. It is not something that happens overnight.

Thursday, June 29, 2023

Fairytales have always reflected the morals of the age. It’s not a sin to rewrite them

Martha Gill
The Guardian
Originally posted 4 June 23

Here are two excerpts:

General outrage greeted “woke” updates to Roald Dahl books this year, and still periodically erupts over Disney remakes, most recently a forthcoming film with a Latina actress as Snow White, and a new Peter Pan & Wendy with “lost girls”. The argument is that too much fashionable refurbishment tends to ruin a magical kingdom, and that cult classics could do with the sort of Grade I listing applied to heritage buildings. If you want to tell new stories, fine – but why not start from scratch?

But this point of view misses something, which is that updating classics is itself an ancient part of literary culture; in fact, it is a tradition, part of our heritage too. While the larger portion of the literary canon is carefully preserved, a slice of it has always been more flexible, to be retold and reshaped as times change.

Fairytales fit within this latter custom: they have been updated, periodically, for many hundreds of years. Cult figures such as Dracula, Frankenstein and Sherlock Holmes fit there too, as do superheroes: each generation, you might say, gets the heroes it deserves. And so does Bond. Modernity is both a villain and a hero within the Bond franchise: the dramatic tension between James – a young cosmopolitan “dinosaur” – and the passing of time has always been part of the fun.

This tradition has a richness to it: it is a historical record of sorts. Look at the progress of the fairy story through the ages and you get a twisty tale of dubious progress, a moral journey through the woods. You could say fairytales have always been politically correct – that is, tweaked to reflect whatever morals a given cohort of parents most wanted to teach their children.

(cut)

The idea that we are pasting over history – censoring important artefacts – is wrongheaded too. It is not as if old films or books have been burned, wiped from the internet or removed from libraries. With today’s propensity for writing things down, common since the 1500s, there is no reason to fear losing the “original” stories.

As for the suggestion that minority groups should make their own stories instead – this is a sly form of exclusion. Ancient universities and gentlemen’s clubs once made similar arguments; why couldn’t exiled individuals simply set up their own versions? It is not so easy. Old stories weave themselves deep into the tapestry of a nation; newer ones will necessarily be confined to the margins.


My take: Updating classic stories can be beneficial and even necessary to promote inclusion, diversity, equity, and fairness. By not updating these stories, we risk perpetuating harmful stereotypes and narratives that reinforce the dominant culture. When we update classic stories, we can create new possibilities for representation and understanding that can help to build a more just and equitable world.  Dominant cultures need to cede power to promote more unity in a multicultural nation.

Wednesday, October 12, 2022

Gender-diverse teams produce more novel and higher-impact scientific ideas

Yang, Y., Tian, T. Y., et al. (2022, August 29). 
Proceedings of the National Academy of Sciences, 119(36).
https://doi.org/10.1073/pnas.2200841119

Abstract

Science’s changing demographics raise new questions about research team diversity and research outcomes. We study mixed-gender research teams, examining 6.6 million papers published across the medical sciences since 2000 and establishing several core findings. First, the fraction of publications by mixed-gender teams has grown rapidly, yet mixed-gender teams continue to be underrepresented compared to the expectations of a null model. Second, despite their underrepresentation, the publications of mixed-gender teams are substantially more novel and impactful than the publications of same-gender teams of equivalent size. Third, the greater the gender balance on a team, the better the team scores on these performance measures. Fourth, these patterns generalize across medical subfields. Finally, the novelty and impact advantages seen with mixed-gender teams persist when considering numerous controls and potential related features, including fixed effects for the individual researchers, team structures, and network positioning, suggesting that a team’s gender balance is an underrecognized yet powerful correlate of novel and impactful scientific discoveries.

Significance

Science teams made up of men and women produce papers that are more novel and highly cited than those of all-men or all-women teams. These performance advantages increase the greater the team’s gender balance and appear nearly universal. On average, they hold for small and large teams, the 45 subfields of medicine, and women- or men-led teams and generalize to published papers in all science fields over the last 20 y. Notwithstanding these benefits, gender-diverse teams remain underrepresented in science when compared to what is expected if the teams in the data had been formed without regard to gender. These findings reveal potentially new gender and teamwork synergies that correlate with scientific discoveries and inform diversity, equity, and inclusion (DEI) initiatives.

Discussion

Conducting an analysis of 6.6 million published papers from more than 15,000 different medical journals worldwide, we find that mixed-gender teams—teams combining women and men scientists—produce more novel and more highly cited papers than all-women or all-men teams. Mixed-gender teams publish papers that are up to 7% more novel and 14.6% more likely to be upper-tail papers than papers published by same-gender teams, results that are robust to numerous institutional, team, and individual controls and further generalize by subfield. Finally, in exploring gender in science through the lens of teamwork, the results point to a potentially transformative approach for thinking about and capturing the value of gender diversity in science.

Another key finding of this work is that mixed-gender teams are significantly underrepresented compared to what would be expected by chance. This underrepresentation is all the more striking given the findings that gender-diverse teams produce more novel and high-impact research and suggests that gender-diverse teams may have substantial untapped potential for medical research. Nevertheless, the underrepresentation of gender-diverse teams may reflect research showing that women receive less credit for their successes than do men teammates, which in turn inhibits the formation of gender-diverse teams and women’s success in receiving grants, prizes, and promotions.
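To see what "underrepresented compared to chance" means in practice, here is a back-of-the-envelope null model (my sketch; the paper's actual null model is more careful, conditioning on the observed author pool and team-size distribution). If each team slot were filled without regard to gender, a team would be mixed unless it happened to be all-women or all-men:

```python
def p_mixed(team_size: int, frac_women: float) -> float:
    """Probability that a randomly assembled team is mixed-gender, assuming
    each member is independently a woman with probability frac_women."""
    return 1 - frac_women ** team_size - (1 - frac_women) ** team_size

# e.g., a subfield where 40% of authors are women (illustrative figure)
for size in (2, 3, 5, 8):
    print(f"team of {size}: {p_mixed(size, 0.40):.0%} expected to be mixed-gender")
```

Observed mixed-gender shares falling below baselines like these are what the authors mean by underrepresentation.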

Thursday, May 27, 2021

How Adobe’s Ethics Committee Helps Manage AI Bias

Jared Council
The Wall Street Journal
Originally posted 5 May 21

Review boards can help companies mitigate some of the risks associated with using artificial intelligence, according to Adobe Inc. executive Dana Rao.

Mr. Rao, Adobe’s general counsel, said one of the top risks in using AI systems is that the technology can perpetuate harmful bias against certain demographics, based on what it learns from data. Ethics committees can be one way of managing those risks and putting organizational values into practice.

Adobe’s AI ethics committee, launched two years ago, has been able to review new features for potential bias before those features are deployed, Mr. Rao said Wednesday at The Wall Street Journal’s Risk & Compliance Forum. The committee is made up of employees of various ethnicities and genders from different parts of the company, including legal, government relations and marketing.

“It takes a lot of people across your company to help figure this out,” he said. “Sometimes we might look at it and say there’s not an issue here,” he said, but getting a diverse group of people together can help identify issues product developers might miss.
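One concrete check a review committee like this might run before a feature ships is a disparate-impact ratio on the model's decisions. The sketch below is purely illustrative (it is not Adobe's process, and the 0.8 threshold is a rule of thumb borrowed from US employment guidance):

```python
import numpy as np

def disparate_impact(preds: np.ndarray, group: np.ndarray) -> float:
    """Ratio of positive-outcome rates between two demographic groups;
    values well below 1.0 flag potential adverse impact."""
    rate_a = preds[group == 0].mean()
    rate_b = preds[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Synthetic decisions from a deliberately biased model, for illustration only
rng = np.random.default_rng(7)
group = rng.integers(0, 2, 10_000)                              # two demographic groups
preds = (rng.random(10_000) < np.where(group == 0, 0.30, 0.22)).astype(float)

ratio = disparate_impact(preds, group)
print(f"disparate impact ratio: {ratio:.2f}" + ("  <- flagged for review" if ratio < 0.8 else ""))
```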

Thursday, March 12, 2020

Artificial Intelligence in Health Care

M. Matheny, D. Whicher, & S. Israni
JAMA. 2020;323(6):509-510.
doi:10.1001/jama.2019.21579

The promise of artificial intelligence (AI) in health care offers substantial opportunities to improve patient and clinical team outcomes, reduce costs, and influence population health. Current data generation greatly exceeds human cognitive capacity to effectively manage information, and AI is likely to have an important and complementary role to human cognition to support delivery of personalized health care.  For example, recent innovations in AI have shown high levels of accuracy in imaging and signal detection tasks and are considered among the most mature tools in this domain.

However, there are challenges in realizing the potential for AI in health care. Disconnects between reality and expectations have led to prior precipitous declines in use of the technology, termed AI winters, and another such event is possible, especially in health care.  Today, AI has outsized market expectations and technology sector investments. Current challenges include using biased data for AI model development, applying AI outside of populations represented in the training and validation data sets, disregarding the effects of possible unintended consequences on care or the patient-clinician relationship, and limited data about actual effects on patient outcomes and cost of care.

AI in Healthcare: The Hope, The Hype, The Promise, The Peril, a publication by the National Academy of Medicine (NAM), synthesizes current knowledge and offers a reference document for the responsible development, implementation, and maintenance of AI in the clinical enterprise.  The publication outlines current and near-term AI solutions; highlights the challenges, limitations, and best practices for AI development, adoption, and maintenance; presents an overview of the legal and regulatory landscape for health care AI; urges the prioritization of equity, inclusion, and a human rights lens for this work; and outlines considerations for moving forward. This Viewpoint shares highlights from the NAM publication.


Wednesday, March 27, 2019

The Value Of Ethics And Trust In Business.. With Artificial Intelligence

Stephen Ibaraki
Forbes.com
Originally posted March 2, 2019

Here is an excerpt:

Increasingly contributing positively to society and driving positive change are a growing discourse around the world and hitting all sectors and disruptive technologies such as Artificial Intelligence (AI).

With more than $20 Trillion USD wealth transfer from baby boomers to millennials, and their focus on the environment and social impact, this trend will accelerate. Business is aware and taking the lead in this movement of advancing the human condition in a responsible and ethical manner. Values-based leadership, diversity, inclusion, investment and long-term commitment are the multi-stakeholder commitments going forward.

“Over the last 12 years, we have repeatedly seen that those companies who focus on transparency and authenticity are rewarded with the trust of their employees, their customers and their investors. While negative headlines might grab attention, the companies who support the rule of law and operate with decency and fair play around the globe will always succeed in the long term,” explained Ethisphere CEO, Timothy Erblich. “Congratulations to all of the 2018 honorees.”


Friday, February 22, 2019

Facebook Backs University AI Ethics Institute With $7.5 Million

Sam Shead
Forbes.com
Originally posted January 20, 2019

Facebook is backing an AI ethics institute at the Technical University of Munich with $7.5 million.

The TUM Institute for Ethics in Artificial Intelligence, which was announced on Sunday, will aim to explore fundamental issues affecting the use and impact of AI, Facebook said.

AI is poised to have a profound impact on areas like climate change and healthcare but it has its risks.

"We will explore the ethical issues of AI and develop ethical guidelines for the responsible use of the technology in society and the economy. Our evidence-based research will address issues that lie at the interface of technology and human values," said TUM Professor Dr. Christoph Lütge, who will lead the institute.

"Core questions arise around trust, privacy, fairness or inclusion, for example, when people leave data traces on the internet or receive certain information by way of algorithms. We will also deal with transparency and accountability, for example in medical treatment scenarios, or with rights and autonomy in human decision-making in situations of human-AI interaction."


Friday, September 14, 2018

What Are “Ethics in Design”?

Victoria Sgarro
slate.com
Originally posted August 13, 2018

Here is an excerpt:

As a product designer, I know that no mandate exists to integrate these ethical checks and balances in our process. While I may hear a lot of these issues raised at speaking events and industry meetups, more “practical” considerations can overshadow these conversations in my day-to-day decision making. When they have to compete with the workaday pressures of budgets, roadmaps, and clients, these questions won’t emerge as priorities organically.

Most important, then, is action. Castillo worries that the conversation about “ethics in design” could become a cliché, like “empathy” or “diversity” in tech, where it’s more talk than walk. She says it’s not surprising that ethics in tech hasn’t been addressed in depth in the past, given the industry’s lack of diversity. Because most tech employees come from socially privileged backgrounds, they may not be as attuned to ethical concerns. A designer who identifies with society’s dominant culture may have less personal need to take another perspective. Indeed, identification with a society’s majority is shown to be correlated with less critical awareness of the world outside of yourself. Castillo says that, as a black woman in America, she’s a bit wary of this conversation’s effectiveness if it remains only a conversation.

“You know how someone says, ‘Why’d you become a nurse or doctor?’ And they say, ‘I want to help people’?” asks Castillo. “Wouldn’t it be cool if someone says, ‘Why’d you become an engineer or a product designer?’ And you say, ‘I want to help people.’ ”


Tuesday, April 10, 2018

Should We Root for Robot Rights?

Evan Selinger
Medium.com
Originally posted February 15, 2018

Here is an excerpt:

Maybe there’s a better way forward — one where machines aren’t kept firmly in their machine-only place, humans don’t get wiped out Skynet-style, and our humanity isn’t sacrificed by giving robots a better deal.

While the legal challenges ahead may seem daunting, they pose enticing puzzles for many thoughtful legal minds, who are even now diligently embracing the task. Annual conferences like We Robot — to pick but one example — bring together the best and the brightest to imagine and propose creative regulatory frameworks that would impose accountability in various contexts on designers, insurers, sellers, and owners of autonomous systems.

From the application of centuries-old concepts like “agency” to designing cutting-edge concepts for drones and robots on the battlefield, these folks are ready to explore the hard problems of machines acting with varying shades of autonomy. For the foreseeable future, these legal theories will include clear lines of legal responsibility for the humans in the loop, particularly those who abuse technology either intentionally or through carelessness.

The social impacts of our seemingly insatiable need to interact with our devices have been drawing accelerated attention for at least a decade. From the American Academy of Pediatrics creating recommendations for limiting screen time to updating etiquette and social mores for devices while dining, we are attacking these problems through both institutional and cultural channels.
