Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, October 18, 2023

Responsible Agency and the Importance of Moral Audience

Jefferson, A., & Sifferd, K. 
Ethical Theory and Moral Practice, 26, 361–375 (2023).

Abstract

Ecological accounts of responsible agency claim that moral feedback is essential to the reasons-responsiveness of agents. In this paper, we discuss McGeer’s scaffolded reasons-responsiveness account in the light of two concerns. The first is that some agents may be less attuned to feedback from their social environment but are nevertheless morally responsible agents – for example, autistic people. The second is that moral audiences can actually work to undermine reasons-responsiveness if they espouse the wrong values. We argue that McGeer’s account can be modified to handle both problems. Once we understand the specific roles that moral feedback plays for recognizing and acting on moral reasons, we can see that autistics frequently do rely on such feedback, although it often needs to be more explicit. Furthermore, although McGeer is correct to highlight the importance of moral feedback, audience sensitivity is not all that matters to reasons-responsiveness; it needs to be tempered by a consistent application of moral rules. Agents also need to make sure that they choose their moral audiences carefully, paying special attention to receiving feedback from audiences which may be adversely affected by their actions.


Here is my take:

Responsible agency is the ability to act on the right moral reasons, even when doing so is difficult or costly. A moral audience is the group of people whose moral opinions we care about and respect.

According to the authors, moral audience plays a crucial role in responsible agency in two ways:
  1. It helps us to identify and internalize the right moral reasons. We learn about morality from our moral audience, and we are more likely to act on moral reasons if we know that our audience would approve of our actions.
  2. It provides us with motivation to act on moral reasons. We are more likely to do the right thing if we know that our moral audience will be disappointed in us if we don't.
The authors argue that moral audience is particularly important for responsible agency in novel contexts, where we may not have clear guidance from existing moral rules or norms. In these situations, we need to rely on our moral audience to help us to identify and act on the right moral reasons.

The authors also discuss some of the challenges that can arise when we are trying to identify and act on the right moral reasons. For example, our moral audience may have different moral views than we do, or they may be biased in some way. In these cases, we need to be able to critically evaluate our moral audience's views and make our own judgments about what is right and wrong.

Overall, the article makes a strong case for the importance of moral audience in developing and maintaining responsible agency. It is important to have a group of people whose moral opinions we care about and respect, and to be open to their feedback. This can help us to become more morally responsible agents.

Tuesday, October 17, 2023

Tackling healthcare AI's bias, regulatory and inventorship challenges

Bill Siwicki
Healthcare IT News
Originally posted 29 August 23

While AI adoption is increasing in healthcare, there are privacy and content risks that come with technology advancements.

Healthcare organizations, according to Dr. Terri Shieh-Newton, an immunologist and a member at global law firm Mintz, must have an approach to AI that best positions them for growth, including managing:
  • Biases introduced by AI. Provider organizations must be mindful of how machine learning is integrating racial diversity, gender and genetics into practice to support the best outcome for patients.
  • Inventorship claims on intellectual property. Identifying ownership of IP as AI begins to develop solutions in a faster, smarter way compared to humans.
Healthcare IT News sat down with Shieh-Newton to discuss these issues, as well as the regulatory landscape’s response to data and how that impacts AI.

Q. Please describe the generative AI challenge with biases introduced from AI itself. How is machine learning integrating racial diversity, gender and genetics into practice?
A. Generative AI is a type of machine learning that can create new content based on the training of existing data. But what happens when that training set comes from data that has inherent bias? Biases can appear in many forms within AI, starting from the training set of data.

Take, as an example, a training set of patient samples that is already biased because the samples were collected from a non-diverse population. If this training set is used to discover a new drug, the generative AI model may produce a drug that works in only a subset of the population – or has only partial functionality.

Desirable traits of novel drugs include better binding to their targets and lower toxicity. If the training set excludes a population of patients of a certain gender or race (and the genetic differences that are inherent therein), then the proposed drug compounds will not be as robust as when the training sets include a diversity of data.

This leads into questions of ethics and policies, where the most marginalized population of patients who need the most help could be the group that is excluded from the solution because they were not included in the underlying data used by the generative AI model to discover that new drug.

One can address this issue with more deliberate curation of the training databases. For example, is the patient population inclusive of many types of racial backgrounds? Gender? Age ranges?

By making sure there is a reasonable representation of gender, race and genetics included in the initial training set, generative AI models can accelerate drug discovery, for example, in a way that benefits most of the population.
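
The curation questions above can be made concrete with a simple representation check over a training set. This is a minimal sketch, not a real auditing tool; the attribute name, the `min_share` threshold, and the sample data are all hypothetical.

```python
from collections import Counter

def audit_representation(records, attribute, min_share=0.2):
    """Return groups whose share of the training set falls below
    min_share, as {group: share}. An empty dict means no group is
    flagged as underrepresented for this attribute."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total
            for group, n in counts.items()
            if n / total < min_share}

# Hypothetical patient cohort: 90 samples from one group, 10 from another.
samples = [{"ancestry": "group_a"}] * 90 + [{"ancestry": "group_b"}] * 10
print(audit_representation(samples, "ancestry"))  # {'group_b': 0.1}
```

The same check could be run per attribute (race, gender, age range) before training, which is essentially the "deliberate curation" Shieh-Newton describes.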


Here is my take:

One of the biggest challenges is bias. AI systems are trained on data, and if that data is biased, the AI system will be biased as well. This can have serious consequences in healthcare, where biased AI systems could lead to patients receiving different levels of care or being denied care altogether.

Another challenge is regulation. Healthcare is a highly regulated industry, and AI systems need to comply with a variety of laws and regulations. This can be complex and time-consuming, and it can be difficult for healthcare organizations to keep up with the latest changes.

Finally, the article discusses the challenges of inventorship. As AI systems become more sophisticated, it can be difficult to determine who is the inventor of a new AI-powered healthcare solution. This can lead to disputes and delays in bringing new products and services to market.

The article concludes by offering some suggestions for how to address these challenges:
  • To reduce bias, healthcare organizations need to be mindful of the data they are using to train their AI systems. They should also audit their AI systems regularly to identify and address any bias.
  • To comply with regulations, healthcare organizations need to work with experts to ensure that their AI systems meet all applicable requirements.
  • To resolve inventorship disputes, healthcare organizations should develop clear policies and procedures for allocating intellectual property rights.
By addressing these challenges, healthcare organizations can ensure that AI is deployed in a way that is safe, effective, and ethical.

Additional thoughts

In addition to the challenges discussed in the article, there are a number of other factors that need to be considered when deploying AI in healthcare. For example, it is important to ensure that AI systems are transparent and accountable. This means that healthcare organizations should be able to explain how their AI systems work and why they make the decisions they do.

It is also important to ensure that AI systems are fair and equitable. This means that they should treat all patients equally, regardless of their race, ethnicity, gender, income, or other factors.

Finally, it is important to ensure that AI systems are used in a way that respects patient privacy and confidentiality. This means that healthcare organizations should have clear policies in place for the collection, use, and storage of patient data.

By carefully considering all of these factors, healthcare organizations can ensure that AI is used to improve patient care and outcomes in a responsible and ethical way.

Monday, October 16, 2023

Why Every Leader Needs to Worry About Toxic Culture

D. Sull, W. Cipolli, & C. Brighenti
MIT Sloan Management Review
Originally posted 16 March 22

Here is an excerpt:

The High Costs of a Toxic Culture

By identifying the core elements of a toxic culture, we can synthesize existing research on closely related topics, including discrimination, abusive managers, unethical organizational behavior, workplace injustice, and incivility. This research allows us to tally the full cost of a toxic culture to individuals and organizations. And the toll, in human suffering and financial expenses, is staggering.

A large body of research shows that working in a toxic atmosphere is associated with elevated levels of stress, burnout, and mental health issues. Toxicity also translates into physical illness. When employees experience injustice in the workplace, their odds of suffering a major disease (including coronary disease, asthma, diabetes, and arthritis) increase by 35% to 55%.

In addition to the pain imposed on employees, a toxic culture also imposes costs that flow directly to the organization’s bottom line. When a toxic atmosphere makes workers sick, for example, their employer typically foots the bill. Among U.S. workers with health benefits, two-thirds have their health care expenses paid directly by their employer. By one estimate, toxic workplaces added an incremental $16 billion in employee health care costs in 2008. The figure below summarizes some of the costs of a toxic culture for organizations.

According to a study from the Society for Human Resource Management, 1 in 5 employees left a job at some point in their career because of its toxic culture. That survey, conducted before the pandemic, is consistent with our findings that a toxic culture is the best predictor of a company experiencing higher employee attrition than its industry overall during the first six months of the Great Resignation. Gallup estimates that the cost of replacing an employee who quits can total up to two times their annual salary when all direct and indirect expenses are accounted for.

Companies with a toxic culture will not only lose employees — they’ll also struggle to replace workers who jump ship. Over three-quarters of job seekers research an employer’s culture before applying for a job. In an age of online employee reviews, companies cannot keep their culture problems a secret for long, and a toxic culture, as we showed above, is by far the strongest predictor of a low review on Glassdoor. Having a toxic employer brand makes it harder to attract candidates.


Here is my take:

The article identifies five attributes that have a disproportionate impact on how workers view a toxic culture:
  • Disrespectful
  • Noninclusive
  • Unethical
  • Cutthroat
  • Abusive
Leaders play a pivotal role in shaping and maintaining a positive work culture. They must be aware of the impact of toxic culture and actively work towards building a healthy and supportive environment.

To tackle toxic culture, leaders must first identify the behaviors and practices that contribute to it. Common toxic behaviors include micromanagement, lack of transparency, favoritism, excessive competition, and poor communication. Once the root causes of the problem have been identified, leaders can develop strategies to address them.

The article provides a number of recommendations for leaders to create a positive work culture, including:
  • Setting clear expectations for behavior and holding employees accountable
  • Fostering a culture of trust and respect
  • Promoting diversity and inclusion
  • Providing employees with opportunities for growth and development
  • Creating a work-life balance

Leaders who are committed to creating a positive work culture will see the benefits reflected in their team's performance and the organization's bottom line.

Sunday, October 15, 2023

Bullshit blind spots: the roles of miscalibration and information processing in bullshit detection

Shane Littrell & Jonathan A. Fugelsang
(2023) Thinking & Reasoning
DOI: 10.1080/13546783.2023.2189163

Abstract

The growing prevalence of misleading information (i.e., bullshit) in society carries with it an increased need to understand the processes underlying many people’s susceptibility to falling for it. Here we report two studies (N = 412) examining the associations between one’s ability to detect pseudo-profound bullshit, confidence in one’s bullshit detection abilities, and the metacognitive experience of evaluating potentially misleading information. We find that people with the lowest (highest) bullshit detection performance overestimate (underestimate) their detection abilities and overplace (underplace) those abilities when compared to others. Additionally, people reported using both intuitive and reflective thinking processes when evaluating misleading information. Taken together, these results show that both highly bullshit-receptive and highly bullshit-resistant people are largely unaware of the extent to which they can detect bullshit and that traditional miserly processing explanations of receptivity to misleading information may be insufficient to fully account for these effects.


Here's my summary:

The authors of the article argue that people have two main blind spots when it comes to detecting bullshit: miscalibration and information processing. Miscalibration is a mismatch between confidence and actual ability: the weakest bullshit detectors overestimate how good they are, while the strongest underestimate themselves.

Information processing refers to how we evaluate claims when forming judgments. The authors argue that we are more likely to be fooled by bullshit when we are not paying close attention or when we process information quickly and intuitively rather than reflectively.
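
The miscalibration pattern, with low performers overestimating and high performers underestimating their abilities, can be sketched with toy numbers (the scores below are hypothetical, not the study's data):

```python
from statistics import mean

def calibration_gaps(actual, estimated):
    """Split participants at the median of actual performance and
    report the mean (estimated - actual) gap for each half.
    Positive = overconfidence, negative = underconfidence."""
    paired = sorted(zip(actual, estimated))
    mid = len(paired) // 2

    def gap(rows):
        return mean(e - a for a, e in rows)

    return {"low_performers": gap(paired[:mid]),
            "high_performers": gap(paired[mid:])}

# Hypothetical detection scores out of 10, with each person's
# self-estimate of their own score.
actual    = [2, 3, 4, 8, 9, 10]
estimated = [6, 7, 7, 7, 8, 9]
print(calibration_gaps(actual, estimated))
```

With these made-up numbers, the low scorers' gap is positive (overconfidence) and the high scorers' gap is negative (underconfidence), mirroring the asymmetry the paper reports.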

The authors also discuss some strategies for overcoming these blind spots. One strategy is to be aware of our own biases and limitations. We should also be critical of the information that we consume and take the time to evaluate evidence carefully.

Overall, the article provides a helpful framework for understanding the challenges of bullshit detection. It also offers some practical advice for overcoming these challenges.

Here are some additional tips for detecting bullshit:
  • Be skeptical of claims that seem too good to be true.
  • Look for evidence to support the claims that are being made.
  • Be aware of the speaker or writer's motives.
  • Ask yourself if the claims are making sense and whether they are consistent with what you already know.
  • If you're not sure whether something is bullshit, it's better to err on the side of caution and be skeptical.

Saturday, October 14, 2023

Overconfidently conspiratorial: Conspiracy believers are dispositionally overconfident and massively overestimate how much others agree with them

Pennycook, G., Binnendyk, J., & Rand, D. G. 
(2022, December 5). PsyArXiv

Abstract

There is a pressing need to understand belief in false conspiracies. Past work has focused on the needs and motivations of conspiracy believers, as well as the role of overreliance on intuition. Here, we propose an alternative driver of belief in conspiracies: overconfidence. Across eight studies with 4,181 U.S. adults, conspiracy believers not only relied more on intuition, but also overestimated their performance on numeracy and perception tests (i.e. were overconfident in their own abilities). This relationship with overconfidence was robust to controlling for analytic thinking, need for uniqueness, and narcissism, and was strongest for the most fringe conspiracies. We also found that conspiracy believers – particularly overconfident ones – massively overestimated (>4x) how much others agree with them: Although conspiracy beliefs were in the majority in only 12% of 150 conspiracies across three studies, conspiracy believers thought themselves to be in the majority 93% of the time.

Here is my summary:

The research found that people who believe in conspiracy theories are more likely to be overconfident in their own abilities and to overestimate how much others agree with them. This was true even when controlling for other factors, such as analytic thinking, need for uniqueness, and narcissism.

The researchers conducted a series of studies to test their hypothesis. In one study, they found that people who believed in conspiracy theories were more likely to overestimate their performance on numeracy and perception tests. In another study, they found that people who believed in conspiracy theories were more likely to overestimate how much others agreed with them about a variety of topics, including climate change and the 2016 US presidential election.
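
The abstract's headline numbers (beliefs were actually majority views in only 12% of conspiracies, yet believers thought themselves in the majority 93% of the time) boil down to a simple comparison, sketched here with made-up data rather than the study's:

```python
from statistics import mean

def majority_rates(conspiracies):
    """conspiracies: list of (actual_share, perceived_share) pairs,
    where actual_share is the fraction of people who hold the belief
    and perceived_share is what believers think that fraction is.
    Returns the rate at which each exceeds 50%."""
    return {
        "actually_majority": mean(a > 0.5 for a, _ in conspiracies),
        "believed_majority": mean(p > 0.5 for _, p in conspiracies),
    }

# Hypothetical data: the belief is rarely a true majority view,
# but believers almost always perceive it as one.
data = [(0.10, 0.70), (0.20, 0.80), (0.60, 0.90),
        (0.15, 0.65), (0.30, 0.40)]
print(majority_rates(data))  # {'actually_majority': 0.2, 'believed_majority': 0.8}
```

The gap between the two rates is the false-consensus effect the paper documents at scale.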

The researchers suggest that overconfidence may play a role in the formation and maintenance of conspiracy beliefs. When people are overconfident, they are more likely to dismiss evidence that contradicts their beliefs and to seek out information that confirms their beliefs. This can lead to a "filter bubble" effect, where people are only exposed to information that reinforces their existing beliefs.

The researchers also suggest that overconfidence may lead people to overestimate how much others agree with them about their conspiracy beliefs. This can make them feel more confident in their beliefs and less likely to question them.

The findings of this research have implications for understanding and addressing the spread of conspiracy theories. It is important to be aware of the role that overconfidence may play in the formation and maintenance of conspiracy beliefs. This knowledge can be used to develop more effective interventions to prevent people from falling for conspiracy theories and to help people who already believe in conspiracy theories to critically evaluate their beliefs.

Friday, October 13, 2023

Humans Have Crossed 6 of 9 ‘Planetary Boundaries’

Meghan Bartels
Scientific American
Originally posted 13 September 23

Here is an excerpt:

The new study marks the second update since the 2009 paper and the first time scientists have included numerical guideposts for each boundary—a very significant development. “What is novel about this paper is: it’s the first time that all nine boundaries have been quantified,” says Rak Kim, an environmental social scientist at Utrecht University in the Netherlands, who wasn’t involved in the new study.

Since its initial presentation, the planetary boundaries model has drawn praise for presenting the various intertwined factors—beyond climate change alone—that influence Earth’s habitability. Carbon dioxide levels are included in the framework, of course, but so are biodiversity loss, chemical pollution, changes in the use of land and fresh water and the presence of the crucial elements nitrogen and phosphorus. None of these boundaries stands in isolation; for example, land use changes can affect biodiversity, and carbon dioxide affects ocean acidification, among other connections.

“It’s very easy to think about: there are eight, nine boundaries—but I think it’s a challenge to explain to people how these things interact,” says political scientist Victor Galaz of the Stockholm Resilience Center, a joint initiative of Stockholm University and the Beijer Institute of Ecological Economics at the Royal Swedish Academy of Sciences, who focuses on climate governance and wasn’t involved in the new research. “You pull on one end, and actually you’re affecting something else. And I don’t think people really understand that.”

Although the nine overall factors themselves are the same as those first identified in the 2009 paper, researchers on the projects have fine-tuned some of these boundaries’ details. “This most recent iteration has done a very nice job of fleshing out more and more data—and, more and more quantitatively, where we sit with respect to those boundaries,” says Jonathan Foley, executive director of Project Drawdown, a nonprofit organization that develops roadmaps for climate solutions. Foley was a co-author on the original 2009 paper but was not involved in the new research.

Still, the overall verdict remains the same as it was nearly 15 years ago. “It’s pretty alarming: We’re living on a planet unlike anything any humans have seen before,” Foley says. (Humans are also struggling to meet the United Nations’ 17 Sustainable Development Goals, which are designed to address environmental and societal challenges, such as hunger and gender inequality, in tandem.)


Here is my summary:

Planetary boundaries are the limits within which humanity can operate without causing irreversible damage to the Earth's ecosystems. The six boundaries that have been crossed are:
  • Climate change
  • Biosphere integrity
  • Land-system change
  • Nitrogen and phosphorus flows
  • Freshwater use
  • Novel entities (synthetic chemicals, including plastics)
The study found that these boundaries have been crossed due to a combination of factors, including population growth, economic development, and unsustainable consumption patterns. The authors of the study warn that crossing these planetary boundaries could have serious consequences for human health and well-being.

The article also discusses the implications of the study's findings for policymakers and businesses. The authors argue that we need to make a fundamental shift in the way we live and produce goods and services in order to stay within the planetary boundaries. This will require investments in renewable energy, sustainable agriculture, and other technologies that can help us to decouple economic growth from environmental damage.

Overall, the article provides a sobering assessment of the state of the planet. It is clear that we need to take urgent action to address the environmental challenges that we face.

Thursday, October 12, 2023

Patients need doctors who look like them. Can medicine diversify without affirmative action?

Kat Stafford
apnews.com
Originally posted 11 September 23

Here are two excerpts:

But more than two months after the Supreme Court struck down affirmative action in college admissions, concerns have arisen that a path into medicine may become much harder for students of color. Heightening the alarm: the medical field’s reckoning with longstanding health inequities.

Black Americans represent 13% of the U.S. population, yet just 6% of U.S. physicians are Black. Increasing representation among doctors is one solution experts believe could help disrupt health inequities.

The disparities stretch from birth to death, often beginning before Black babies take their first breath, a recent Associated Press series showed. Over and over, patients said their concerns were brushed aside or ignored, in part because of unchecked bias and racism within the medical system and a lack of representative care.

A UCLA study found the percentage of Black doctors had increased just 4% from 1900 to 2018.

But the affirmative action ruling dealt a “serious blow” to the medical field’s goals of improving that figure, the American Medical Association said, by prohibiting medical schools from considering race among many factors in admissions. The ruling, the AMA said, “will reverse gains made in the battle against health inequities.”

The consequences could affect Black health for generations to come, said Dr. Uché Blackstock, a New York emergency room physician and author of “LEGACY: A Black Physician Reckons with Racism in Medicine.”

(cut)

“As medical professionals, any time we see disparities in care or outcomes of any kind, we have to look at the systems in which we are delivering care and we have to look at ways that we are falling short,” Wysong said.

Without affirmative action as a tool, career programs focused on engaging people of color could grow in importance.

For instance, the Pathways initiative engages students from Black, Latino and Indigenous communities from high school through medical school.

The program starts with building interest in dermatology as a career and continues to scholarships, workshops and mentorship programs. The goal: Increase the number of underrepresented dermatology residents from about 100 in 2022 to 250 by 2027, and grow the share of dermatology faculty who are members of color by 2%.

Tolliver credits her success in becoming a dermatologist in part to a scholarship she received through Ohio State University’s Young Scholars Program, which helps talented, first-generation Ohio students with financial need. The scholarship helped pave the way for medical school, but her involvement in the Pathways residency program also was central.

Wednesday, October 11, 2023

The Best-Case Heuristic: 4 Studies of Relative Optimism, Best-Case, Worst-Case, & Realistic Predictions in Relationships, Politics, & a Pandemic

Sjåstad, H., & Van Bavel, J. (2023).
Personality and Social Psychology Bulletin, 0(0).
https://doi.org/10.1177/01461672231191360

Abstract

In four experiments covering three different life domains, participants made future predictions in what they considered the most realistic scenario, an optimistic best-case scenario, or a pessimistic worst-case scenario (N = 2,900 Americans). Consistent with a best-case heuristic, participants made “realistic” predictions that were much closer to their best-case scenario than to their worst-case scenario. We found the same best-case asymmetry in health-related predictions during the COVID-19 pandemic, for romantic relationships, and a future presidential election. In a fully between-subject design (Experiment 4), realistic and best-case predictions were practically identical, and they were naturally made faster than the worst-case predictions. At least in the current study domains, the findings suggest that people generate “realistic” predictions by leaning toward their best-case scenario and largely ignoring their worst-case scenario. Although political conservatism was correlated with lower covid-related risk perception and lower support of early public-health interventions, the best-case prediction heuristic was ideologically symmetric.


Here is my summary:

This research examined how people make predictions about the future in different life domains, such as health, relationships, and politics. The researchers found that people tend to make predictions that are closer to their best-case scenario than to their worst-case scenario, even when asked to make a "realistic" prediction. This is known as the best-case heuristic.

The researchers conducted four experiments to test the best-case heuristic. In the first experiment, participants were asked to make predictions about their risk of getting COVID-19, their satisfaction with their romantic relationship in one year, and the outcome of the next presidential election. Participants were asked to make three predictions for each event: a best-case scenario, a worst-case scenario, and a realistic scenario. The results showed that participants' "realistic" predictions were much closer to their best-case predictions than to their worst-case predictions.

The researchers found the same best-case asymmetry across the remaining experiments, spanning health, relationships, and politics, including a fully between-subjects design in which "realistic" and best-case predictions were practically identical. The findings suggest that people use a best-case heuristic when making predictions about the future, even in serious and important matters.
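
The asymmetry itself is just a distance comparison between the three predictions. A toy sketch with hypothetical forecast values:

```python
def best_case_lean(best, realistic, worst):
    """How much closer the 'realistic' prediction sits to the
    best-case than to the worst-case scenario. Positive values
    mean the realistic prediction leans toward the best case."""
    return abs(worst - realistic) - abs(best - realistic)

# Hypothetical relationship-satisfaction forecasts on a 1-10 scale.
best, realistic, worst = 9.0, 8.5, 3.0
print(best_case_lean(best, realistic, worst))  # 5.0
```

A strongly positive lean, as in this made-up example, is the pattern the paper reports; a value near zero would indicate a genuinely balanced "realistic" prediction.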

The best-case heuristic has several implications for individuals and society. On the one hand, it can help people to maintain a positive outlook on life and to cope with difficult challenges. On the other hand, it can also lead to unrealistic expectations and to a failure to plan for potential problems.

Overall, the research on the best-case heuristic suggests that people's predictions about the future are often biased towards optimism. This is something to be aware of when making important decisions and when planning for the future.

Tuesday, October 10, 2023

The Moral Case for No Longer Engaging With Elon Musk’s X

David Lee
Bloomberg.com
Originally published 5 October 23

Here is an excerpt:

Social networks are molded by the incentives presented to users. In the same way we can encourage people to buy greener cars with subsidies or promote healthy living by giving out smartwatches, so, too, can levers be pulled to improve the health of online life. Online, people can’t be told what to post, but sites can try to nudge them toward behaving in a certain manner, whether through design choices or reward mechanisms.

Under the previous management, Twitter at least paid lip service to this. In 2020, it introduced a feature that encouraged people to actually read articles before retweeting them, for instance, to promote “informed discussion.” Jack Dorsey, the co-founder and former chief executive officer, claimed to be thinking deeply about improving the quality of conversations on the platform — seeking ways to better measure and improve good discourse online. Another experiment was hiding the “likes” count in an attempt to train away our brain’s yearn for the dopamine hit we get from social engagement.

One thing the prior Twitter management didn’t do is actively make things worse. When Musk introduced creator payments in July, he splashed rocket fuel over the darkest elements of the platform. These kinds of posts always existed, in no small number, but are now the despicable main event. There’s money to be made. X’s new incentive structure has turned the site into a hive of so-called engagement farming — posts designed with the sole intent to elicit literally any kind of response: laughter, sadness, fear. Or the best one: hate. Hate is what truly juices the numbers.

The user who shared the video of Carson’s attack wasn’t the only one to do it. But his track record on these kinds of posts, and the inflammatory language, primed it to be boosted by the algorithm. By Tuesday, the user was still at it, making jokes about Carson’s girlfriend. All content monetized by advertising, which X desperately needs. It’s no mistake, and the user’s no fringe figure. In July, he posted that the site had paid him more than $16,000. Musk interacts with him often.


Here's my take: 

Lee pointed out that social networks can shape user behavior through incentives, and the previous management of Twitter had made some efforts to promote healthier online interactions. However, under Elon Musk's management, the platform has taken a different direction, actively encouraging provocative and hateful content to boost engagement.

Lee criticized the new incentive structure on X, where users are financially rewarded for producing controversial content. He argued that as the competition for attention intensifies, the content will likely become more violent and divisive.

Lee also mentioned an incident involving Yoel Roth, the platform's former head of trust and safety, who raised concerns about hate speech on the platform, and Musk's dismissive response to those concerns. That response suggests Musk is no business genius and does not understand how to foster a healthy social media site.