Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Bias.

Tuesday, December 19, 2023

Human bias in algorithm design

Morewedge, C.K., Mullainathan, S., Naushan, H.F. et al.
Nat Hum Behav 7, 1822–1824 (2023).

Here is how the article starts:

Algorithms are designed to learn user preferences by observing user behaviour. This causes algorithms to fail to reflect user preferences when psychological biases affect user decision making. For algorithms to enhance social welfare, algorithm design needs to be psychologically informed.

Many people believe that algorithms are failing to live up to their promise to reflect user preferences and improve social welfare. The problem is not technological. Modern algorithms are sophisticated and accurate. Training algorithms on unrepresentative samples contributes to the problem, but failures happen even when algorithms are trained on the population. Nor is the problem caused only by the profit motive. For-profit firms design algorithms at a cost to users, but even non-profit organizations and governments fall short.

All algorithms are built on a psychological model of what the user is doing. The fundamental constraint on this model is the narrowness of the measurable variables for algorithms to predict. We suggest that algorithms fail to reflect user preferences and enhance their welfare because algorithms rely on revealed preferences to make predictions. Designers build algorithms with the erroneous assumption that user behaviour (revealed preferences) tells us (1) what users rationally prefer (normative preferences) and (2) what will enhance user welfare. Reliance on this 95-year-old economic model, rather than the more realistic assumption that users exhibit bounded rationality, leads designers to train algorithms on user behaviour. Revealed preferences can identify unknown preferences, but revealed preferences are an incomplete — and at times misleading — measure of the normative preferences and values of users. It is ironic that modern algorithms are built on an outmoded and indefensible commitment to revealed preferences.
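
To make the revealed-versus-normative gap concrete before my summary, here is a toy simulation of my own (not from the paper): a ranker trained only on observed clicks ends up promoting the items people click impulsively rather than the items they say they value. Every item name and probability below is an invented assumption.

```python
# A minimal, illustrative sketch (mine, not the authors'): a ranker trained only
# on observed clicks ("revealed preferences") can diverge from what users say
# they value ("normative preferences") when impulsive clicking biases behaviour.
# All item names and probabilities below are invented assumptions.

import random

random.seed(0)

ITEMS = ["in-depth article", "how-to guide", "clickbait headline", "celebrity gossip"]

# Hypothetical normative preferences: what users report wanting to see (0-1).
stated_value = {
    "in-depth article": 0.9,
    "how-to guide": 0.8,
    "clickbait headline": 0.3,
    "celebrity gossip": 0.2,
}

# Hypothetical impulsive "pull" that inflates clicks regardless of stated value.
impulse_bonus = {
    "in-depth article": 0.0,
    "how-to guide": 0.0,
    "clickbait headline": 0.5,
    "celebrity gossip": 0.4,
}

def simulate_clicks(n_impressions: int = 10_000) -> dict:
    """Count clicks when click probability mixes stated value with impulse."""
    clicks = {item: 0 for item in ITEMS}
    for _ in range(n_impressions):
        item = random.choice(ITEMS)
        p_click = 0.5 * stated_value[item] + impulse_bonus[item]
        if random.random() < min(p_click, 1.0):
            clicks[item] += 1
    return clicks

clicks = simulate_clicks()

# A behaviour-trained algorithm ranks by click counts (revealed preferences)...
revealed_ranking = sorted(ITEMS, key=lambda i: clicks[i], reverse=True)
# ...while a welfare-oriented ranking would follow stated (normative) preferences.
normative_ranking = sorted(ITEMS, key=lambda i: stated_value[i], reverse=True)

print("Ranked by observed clicks:   ", revealed_ranking)
print("Ranked by stated preferences:", normative_ranking)
```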


Here is my summary.

Human biases can be reflected in algorithms, leading to unintended discriminatory outcomes. The authors argue that algorithms are not simply objective tools, but rather embody the values and assumptions of their creators. They highlight the importance of considering psychological factors when designing algorithms, as human behavior is often influenced by biases. To address this issue, the authors propose a framework for developing psychologically informed algorithms that can better capture user preferences and enhance social welfare. They emphasize the need for a more holistic approach to algorithm design that goes beyond technical considerations and takes into account the human element.

Thursday, November 2, 2023

Doesn't everybody jaywalk? On codified rules that are seldom followed and selectively punished

Wylie, J., & Gantman, A. (2023).
Cognition, 231, 105323.
https://doi.org/10.1016/j.cognition.2022.105323
Abstract

Rules are meant to apply equally to all within their jurisdiction. However, some rules are frequently broken without consequence for most. These rules are only occasionally enforced, often at the discretion of a third-party observer. We propose that these rules—whose violations are frequent, and enforcement is rare—constitute a unique subclass of explicitly codified rules, which we call ‘phantom rules’ (e.g., proscribing jaywalking). Their apparent punishability is ambiguous and particularly susceptible to third-party motives. Across six experiments (N = 1440), we validated the existence of phantom rules and found evidence for their motivated enforcement. First, people played a modified Dictator Game with a novel frequently broken and rarely enforced rule (i.e., a phantom rule). People enforced this rule more often when the “dictator” was selfish (vs. fair) even though the rule only proscribed fractional offers (not selfishness). Then we turned to third person judgments of the U.S. legal system. We found these violations are recognizable to participants as both illegal and commonplace (Experiment 2), differentiable from violations of prototypical laws (Experiment 3) and enforced in a motivated way (Experiments 4a and 4b). Phantom rule violations (but not prototypical legal violations) are seen as more justifiably punished when the rule violator has also violated a social norm (vs. rule violation alone)—unless the motivation to punish has been satiated (Experiment 5). Phantom rules are frequently broken, codified rules. Consequently, their apparent punishability is ambiguous, and their enforcement is particularly susceptible to third party motives.


Here's my quick summary: 

This research explores the concept of "phantom rules". Phantom rules are rules that are frequently broken without consequence for most, and are only occasionally enforced, often at the discretion of a third-party observer. Examples of phantom rules include jaywalking, speeding, and not coming to a complete stop at a stop sign.

The authors argue that phantom rules are a unique subclass of explicitly codified rules, and that they have a number of important implications for our understanding of law and society. For example, phantom rules can lead to people feeling like the law is unfair and that they are being targeted. They can also create a sense of lawlessness and disorder.

The authors conducted six experiments to investigate the psychological and social dynamics of phantom rules. They found evidence that people are more likely to punish violations of phantom rules when the violator has also violated a social norm. They also found that people are more likely to justify the selective enforcement of phantom rules when they believe that the violator is a deserving target.

The authors conclude by arguing that phantom rules are a significant social phenomenon with a number of important implications for law and society. They call for more research on the psychological and social dynamics of phantom rules, and on the impact of phantom rules on people's perceptions of the law and the criminal justice system.

Tuesday, October 17, 2023

Tackling healthcare AI's bias, regulatory and inventorship challenges

Bill Siwicki
Healthcare IT News
Originally posted 29 August 23

While AI adoption is increasing in healthcare, there are privacy and content risks that come with technology advancements.

Healthcare organizations, according to Dr. Terri Shieh-Newton, an immunologist and a member at global law firm Mintz, must have an approach to AI that best positions themselves for growth, including managing:
  • Biases introduced by AI. Provider organizations must be mindful of how machine learning is integrating racial diversity, gender and genetics into practice to support the best outcome for patients.
  • Inventorship claims on intellectual property. Identifying ownership of IP as AI begins to develop solutions in a faster, smarter way compared to humans.
Healthcare IT News sat down with Shieh-Newton to discuss these issues, as well as the regulatory landscape’s response to data and how that impacts AI.

Q. Please describe the generative AI challenge with biases introduced from AI itself. How is machine learning integrating racial diversity, gender and genetics into practice?
A. Generative AI is a type of machine learning that can create new content based on the training of existing data. But what happens when that training set comes from data that has inherent bias? Biases can appear in many forms within AI, starting from the training set of data.

Take, as an example, a training set of patient samples that is already biased because the samples were collected from a non-diverse population. If this training set is used for discovering a new drug, then the outcome of the generative AI model can be a drug that works in only a subset of the population – or has only partial functionality.

Desirable traits of novel drugs include better binding to their targets and lower toxicity. If the training set excludes a population of patients of a certain gender or race (and the genetic differences that are inherent therein), then the proposed drug compounds are not as robust as when the training sets include a diversity of data.

This leads into questions of ethics and policies, where the most marginalized population of patients who need the most help could be the group that is excluded from the solution because they were not included in the underlying data used by the generative AI model to discover that new drug.

One can address this issue with more deliberate curation of the training databases. For example, is the patient population inclusive of many types of racial backgrounds? Gender? Age ranges?

By making sure there is a reasonable representation of gender, race and genetics included in the initial training set, generative AI models can accelerate drug discovery, for example, in a way that benefits most of the population.
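
What that kind of curation check might look like in code: below is a minimal, illustrative sketch (mine, not Shieh-Newton's) that flags demographic groups whose share of a training set falls well below a reference population. The field name, reference shares, and tolerance threshold are all assumptions.

```python
# Illustrative sketch only (not from the interview): flag demographic groups whose
# share of a training set falls well below a reference population. The field name,
# reference shares, and tolerance threshold are assumptions.

from collections import Counter

def representation_audit(records, field, reference_shares, tolerance=0.5):
    """Flag groups whose observed share is below tolerance * expected share."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    flags = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if observed < tolerance * expected:
            flags[group] = {"observed": round(observed, 3), "expected": expected}
    return flags

# Hypothetical training records for a drug-discovery model.
training_records = [{"sex": "female"}] * 120 + [{"sex": "male"}] * 380

# Hypothetical reference: roughly even split in the target patient population.
print(representation_audit(training_records, "sex", {"female": 0.5, "male": 0.5}))
# -> {'female': {'observed': 0.24, 'expected': 0.5}}
```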

The info is here. 

Here is my take:

 One of the biggest challenges is bias. AI systems are trained on data, and if that data is biased, the AI system will be biased as well. This can have serious consequences in healthcare, where biased AI systems could lead to patients receiving different levels of care or being denied care altogether.

Another challenge is regulation. Healthcare is a highly regulated industry, and AI systems need to comply with a variety of laws and regulations. This can be complex and time-consuming, and it can be difficult for healthcare organizations to keep up with the latest changes.

Finally, the article discusses the challenges of inventorship. As AI systems become more sophisticated, it can be difficult to determine who is the inventor of a new AI-powered healthcare solution. This can lead to disputes and delays in bringing new products and services to market.

The article concludes by offering some suggestions for how to address these challenges:
  • To reduce bias, healthcare organizations need to be mindful of the data they are using to train their AI systems. They should also audit their AI systems regularly to identify and address any bias.
  • To comply with regulations, healthcare organizations need to work with experts to ensure that their AI systems meet all applicable requirements.
  • To resolve inventorship disputes, healthcare organizations should develop clear policies and procedures for allocating intellectual property rights.
By addressing these challenges, healthcare organizations can ensure that AI is deployed in a way that is safe, effective, and ethical.

Additional thoughts

In addition to the challenges discussed in the article, there are a number of other factors that need to be considered when deploying AI in healthcare. For example, it is important to ensure that AI systems are transparent and accountable. This means that healthcare organizations should be able to explain how their AI systems work and why they make the decisions they do.

It is also important to ensure that AI systems are fair and equitable. This means that they should treat all patients equally, regardless of their race, ethnicity, gender, income, or other factors.
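
One way to make "fair and equitable" checkable is a simple audit metric. The sketch below computes a demographic parity gap (the difference in approval rates across patient groups); it is purely illustrative, and the data, group labels, and what counts as a worrying gap are my assumptions, not anything from the article.

```python
# Illustrative sketch only: a demographic parity audit comparing how often a model
# approves a treatment across patient groups. Data, group labels, and what counts
# as a worrying gap are assumptions, not anything from the article.

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical model decisions for two patient groups.
decisions = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 55 + [("group_b", False)] * 45
)

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, "parity gap:", round(gap, 2))  # a 0.25 gap would warrant review
```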

Finally, it is important to ensure that AI systems are used in a way that respects patient privacy and confidentiality. This means that healthcare organizations should have clear policies in place for the collection, use, and storage of patient data.

By carefully considering all of these factors, healthcare organizations can ensure that AI is used to improve patient care and outcomes in a responsible and ethical way.

Sunday, May 28, 2023

Above the law? How motivated moral reasoning shapes evaluations of high performer unethicality

Campbell, E. M., Welsh, D. T., & Wang, W. (2023).
Journal of Applied Psychology.
Advance online publication.

Abstract

Recent revelations have brought to light the misconduct of high performers across various fields and occupations who were promoted up the organizational ladder rather than punished for their unethical behavior. Drawing on principles of motivated moral reasoning, we investigate how employee performance biases supervisors’ moral judgment of employee unethical behavior and how supervisors’ performance-focus shapes how they account for moral judgments in promotion recommendations. We test our model in three studies: a field study of 587 employees and their 124 supervisors at a Fortune 500 telecom company, an experiment with two samples of working adults, and an experiment that directly varied explanatory mechanisms. Evidence revealed a moral double standard such that supervisors rendered less punitive judgment of the unethical acts of higher performing employees. In turn, supervisors’ bottom-line mentality (i.e., fixation on achieving results) influenced the degree to which they incorporated their punitive judgments into promotability considerations. By revealing the moral leniency afforded to higher performers and the uneven consequences meted out by supervisors, our results carry implications for behavioral ethics research and for organizations seeking to retain and promote their higher performers while also maintaining ethical standards that are applied fairly across employees.

Here is the opening:

Allegations of unethical conduct perpetrated by prominent, high-performing professionals have been exploding across newsfeeds (Zacharek et al., 2017). From customer service employees and their managers (e.g., Wells Fargo fake accounts; Levitt & Schoenberg, 2020), to actors, producers, and politicians (e.g., long-term corruption of Belarus’ president; Simmons, 2020), to reporters and journalists (e.g., the National Broadcasting Company’s alleged cover-up; Farrow, 2019), to engineers and executives (e.g., Volkswagen’s emissions fraud; Vlasic, 2017), the public has been repeatedly shocked by the egregious behaviors committed by individuals recognized as high performers within their respective fields (Bennett, 2017). 

In the wake of such widespread unethical, corrupt, and exploitative behavior, many have wondered how supervisors could have systematically ignored the conduct of high-performing individuals for so long while they ascended organizational ladders. How could such misconduct have resulted in their advancement to leadership roles rather than stalled or derailed the transgressors’ careers?

The story of Carlos Ghosn at Nissan hints at why and when individuals’ unethical behavior (i.e., lying, cheating, and stealing; Treviño et al., 2006, 2014) may result in less punitive judgment (i.e., the extent to which observed behavior is morally evaluated as negative, incorrect, or inappropriate). During his 30-year career in the automotive industry, Ghosn differentiated himself as a high performer known for effective cost-cutting, strategic planning, and spearheading change; however, in 2018, he fell from grace over allegations of years of financial malfeasance and embezzlement (Leggett, 2019). When allegations broke, Nissan’s CEO stood firm in his punitive judgment that Ghosn’s behavior “cannot be tolerated by the company” (Kageyama, 2018). Still, many questioned why the executives levied judgment on the misconduct that they had overlooked for years. Tokyo bureau chief of the New York Times, Motoko Rich, reasoned that Ghosn “probably would have continued to get away with it … if the company was continuing to be successful. But it was starting to slow down. There were signs that the magic had gone” (Barbaro, 2019). Similarly, an executive pointed squarely to the relevance of Ghosn’s performance, lamenting: “what [had he] done for us lately?” (Chozick & Rich, 2018). As a high performer, Ghosn’s unethical behavior evaded punitive judgment and career consequences from Nissan executives, but their motivation to leniently judge Ghosn’s behavior seemed to wane with his level of performance. In her reporting, Rich observed: “you can get away with whatever you want as long as you’re successful. And once you’re not so successful anymore, then all that rule-breaking and brashness doesn’t look so attractive and appealing anymore” (Barbaro, 2019).

Monday, April 24, 2023

ChatGPT in the Clinic? Medical AI Needs Ethicists

Emma Bedor Hiland
The Hastings Center
Originally published 10 March 2023

Concerns about the role of artificial intelligence in our lives, particularly if it will help us or harm us, improve our health and well-being or work to our detriment, are far from new. Whether 2001: A Space Odyssey’s HAL colored our earliest perceptions of AI, or the much more recent M3GAN, these questions are not unique to the contemporary era, as even the ancient Greeks wondered what it would be like to live alongside machines.

Unlike ancient times, today AI’s presence in health and medicine is not only accepted, it is also normative. Some of us rely upon FitBits or phone apps to track our daily steps and prompt us when to move or walk more throughout our day. Others utilize chatbots available via apps or online platforms that claim to improve user mental health, offering meditation or cognitive behavioral therapy. Medical professionals are also open to working with AI, particularly when it improves patient outcomes. Now the availability of sophisticated chatbots powered by programs such as OpenAI’s ChatGPT has brought us closer to the possibility of AI becoming a primary source in providing medical diagnoses and treatment plans.

Excitement about ChatGPT was the subject of much media attention in late 2022 and early 2023. Many in the health and medical fields were also eager to assess the AI’s abilities and applicability to their work. One study found ChatGPT adept at providing accurate diagnoses and triage recommendations. Others in medicine were quick to jump on its ability to complete administrative paperwork on their behalf. Other research found that ChatGPT reached, or came close to reaching, the passing threshold for the United States Medical Licensing Exam.

Yet the public at large is not as excited about an AI-dominated medical future. A study from the Pew Research Center found that most Americans are “uncomfortable” with the prospect of AI-provided medical care. The data also showed widespread agreement that AI will negatively affect patient-provider relationships, and that the public is concerned health care providers will adopt AI technologies too quickly, before they fully understand the risks of doing so.


In sum: As AI is increasingly used in healthcare, this article argues that there is a need for ethical considerations and expertise to ensure that these systems are designed and used in a responsible and beneficial manner. Ethicists can play a vital role in evaluating and addressing the ethical implications of medical AI, particularly in areas such as bias, transparency, and privacy.

Wednesday, March 15, 2023

Why do we focus on trivial things? Bikeshedding explained

The Decision Lab
An Explainer
Originally posted: No idea

What is Bikeshedding?

Bikeshedding, also known as Parkinson’s law of triviality, describes our tendency to devote a disproportionate amount of our time to menial and trivial matters while leaving important matters unattended.

Where does this bias occur?

Do you ever remember sitting in class and having a teacher get off track from a lesson plan? They may have spent a large portion of your biology class time telling you a personal story and skimmed over important scientific theory. In such an instance, your teacher may have been a victim of bikeshedding, where they spent too long discussing something minor and lost track of what was important. Even though it may have been more entertaining to listen to their story, it did not help you acquire important information.

Although that scenario is one familiar to most, bikeshedding is an issue most commonly seen as a problem in corporate and consulting environments, especially during meetings. Imagine that at work, you have a meeting scheduled to discuss two important issues. The first issue is having to come up with ways in which the company can reduce carbon emissions. The second issue is discussing the implementation of standing desks at the office. It is clear that the first issue is more important, but it is also more complex. You and your coworkers will likely find it much easier to talk about whether or not to get standing desks, and as a result, a large portion of the scheduled meeting time is devoted to this more trivial matter. This disproportionate time allocation is known as bikeshedding and causes complicated matters to receive little attention.

(cut)

How to avoid it?

An awareness of bikeshedding is vital to countering its effects. There are various techniques that can be used in order to ensure that a group or team is being efficient with the time they spend on each topic.

One method to avoid bikeshedding is to have a separate meeting for any major, complex issue. If the topic is brought into a meeting with a long agenda, it can get lost under the trivial issues. However, if it is the main and only purpose for a meeting, it is difficult to avoid talking about it. Keeping meetings specific and focused on a particular issue can help counter bikeshedding. It may also be a good idea to have a particular person appointed to keep the team on task and pull back focus if the discussion does get sidetracked.

Another way of pulling the focus onto particular issues is to have fewer people present at the meeting. Bikeshedding is a big problem in group settings because simple issues entice multiple people to speak, which can drag the discussion out. By only having the necessary people present at a meeting, even if a trivial issue is discussed, it will take up less time since there are fewer people to voice their opinion.


This bias may occur in psychotherapy when the psychologist and patient focus on trivial issues that are easier to discuss or solve, rather than addressing critical, difficult issues. There is a difference between creating a therapeutic attachment and bikeshedding.

Thursday, January 19, 2023

Things could be better

Mastroianni, A., & Ludwin-Peery, E. 
(2022, November 14). 
https://doi.org/10.31234/osf.io/2uxwk

Abstract

Eight studies document what may be a fundamental and universal bias in human imagination: people think things could be better. When we ask people how things could be different, they imagine how things could be better (Study 1). The bias doesn't depend on the wording of the question (Studies 2 and 3). It arises in people's everyday thoughts (Study 4). It is unrelated to people's anxiety, depression, and neuroticism (Study 5). A sample of Polish people responding in English show the same bias (Study 6), as do a sample of Chinese people responding in Mandarin (Study 7). People imagine how things could be better even though it's easier to come up with ways things could be worse (Study 8). Overall, it seems, human imagination has a bias: when people imagine how things could be, they imagine how things could be better.

(cut)

Why Does Human Imagination Work Like This?

Honestly, who knows. Brains are weird, man.

When all else fails, we can always turn to natural selection: maybe this bias helped our ancestors survive. Hungry, rain-soaked hunter-gatherers imagined food in their bellies and roofs over their heads and invented agriculture and architecture. Once warm and full, they out-reproduced their brethren who were busy imagining how much hungrier and wetter they could be.

But really, this is a mystery. We may have uncovered something fundamental about how human imagination works, but it might be a long time before we understand it.

Perhaps This is Why You Can Never Be Happy

Everybody knows about the hedonic treadmill: once you’re moderately happy, it’s hard to get happier. But nobody has ever really explained why this happens. People say things like, “oh, you get used to good things,” but that’s just a description, not an explanation. Why do people get used to good things?

Now we might have an answer: people get used to good things because they’re always imagining how things could be better. So even if things get better, you might not feel better. When you live in a cramped apartment, you dream of getting a house. When you get a house, you dream of a second house. Or you dream of lower property taxes. Or a hot tub. Or two hot tubs. And so on, forever.

Friday, October 28, 2022

Gender and ethnicity bias in medicine: a text analysis of 1.8 million critical care records

David M Markowitz
PNAS Nexus, Volume 1, Issue 4,
September 2022, pg157

Abstract

Gender and ethnicity biases are pervasive across many societal domains including politics, employment, and medicine. Such biases will facilitate inequalities until they are revealed and mitigated at scale. To this end, over 1.8 million caregiver notes (502 million words) from a large US hospital were evaluated with natural language processing techniques in search of gender and ethnicity bias indicators. Consistent with nonlinguistic evidence of bias in medicine, physicians focused more on the emotions of women compared to men and focused more on the scientific and bodily diagnoses of men compared to women. Content patterns were relatively consistent across genders. Physicians also attended to fewer emotions for Black/African and Asian patients compared to White patients, and physicians demonstrated the greatest need to work through diagnoses for Black/African women compared to other patients. Content disparities were clearer across ethnicities, as physicians focused less on the pain of Black/African and Asian patients compared to White patients in their critical care notes. This research provides evidence of gender and ethnicity biases in medicine as communicated by physicians in the field and requires the critical examination of institutions that perpetuate bias in social systems.

Significance Statement

Bias manifests in many social systems, including education, policing, and politics. Gender and ethnicity biases are also common in medicine, though empirical investigations are often limited to small-scale, qualitative work that fails to leverage data from actual patient–physician records. The current research evaluated over 1.8 million caregiver notes and observed patterns of gender and ethnicity bias in language. In these notes, physicians focused more on the emotions of women compared to men, and physicians focused less on the emotions of Black/African patients compared to White patients. These patterns are consistent with other work investigating bias in medicine, though this study is among the first to document such disparities at the language level and at a massive scale.

From the Discussion Section

This evidence is important because it establishes a link between communication patterns and bias that is often unobserved or underexamined in medicine. Bias in medicine has been predominantly revealed through procedural differences among ethnic groups, how patients of different ethnicities perceive their medical treatment, and structures that are barriers-to-entry for women and ethnic minorities. The current work revealed that the language found in everyday caregiver notes reflects disparities and indications of bias—new pathways that can complement other approaches to signal physicians who treat patients inequitably. Caregiver notes, based on their private nature, are akin to medical diaries for physicians as they attend to patients, logging the thoughts, feelings, and diagnoses of medical professionals. Caregivers have the herculean task of tending to those in need, though the current evidence suggests bias and language-based disparities are a part of this system. 
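
As a rough illustration of the dictionary-based approach (the paper itself used far richer natural language processing over 1.8 million real notes), here is a toy sketch that scores each note for emotion-related words and compares average rates across patient groups. The tiny lexicon, the example notes, and the group labels are all invented for illustration.

```python
# Toy illustration only; the study used far richer NLP over 1.8 million real notes.
# The tiny emotion lexicon, example notes, and group labels are invented.

import re
from statistics import mean

EMOTION_WORDS = {"anxious", "worried", "upset", "distressed", "tearful", "calm"}

def emotion_rate(note: str) -> float:
    """Share of words in a note that appear in the emotion lexicon."""
    words = re.findall(r"[a-z']+", note.lower())
    if not words:
        return 0.0
    return sum(w in EMOTION_WORDS for w in words) / len(words)

# Hypothetical caregiver notes grouped by patient gender.
notes_by_group = {
    "women": [
        "patient anxious and tearful overnight, vitals stable",
        "worried about discharge plan, appears distressed",
    ],
    "men": [
        "chest pain resolved, troponin trending down, vitals stable",
        "post-op day two, wound clean, ambulating without assistance",
    ],
}

for group, notes in notes_by_group.items():
    print(group, round(mean(emotion_rate(n) for n in notes), 3))
```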

Tuesday, March 1, 2022

Don't ask where I'm from, ask where I'm a local - Taiye Selasi


When someone asks you where you're from … do you sometimes not know how to answer? Writer Taiye Selasi speaks on behalf of "multi-local" people, who feel at home in the town where they grew up, the city they live now and maybe another place or two. "How can I come from a country?" she asks. "How can a human being come from a concept?"

Saturday, January 30, 2021

Scientific communication in a post-truth society

S. Iyengar & D. S. Massey
PNAS Apr 2019, 116 (16) 7656-7661

Abstract

Within the scientific community, much attention has focused on improving communications between scientists, policy makers, and the public. To date, efforts have centered on improving the content, accessibility, and delivery of scientific communications. Here we argue that in the current political and media environment faulty communication is no longer the core of the problem. Distrust in the scientific enterprise and misperceptions of scientific knowledge increasingly stem less from problems of communication and more from the widespread dissemination of misleading and biased information. We describe the profound structural shifts in the media environment that have occurred in recent decades and their connection to public policy decisions and technological changes. We explain how these shifts have enabled unscrupulous actors with ulterior motives increasingly to circulate fake news, misinformation, and disinformation with the help of trolls, bots, and respondent-driven algorithms. We document the high degree of partisan animosity, implicit ideological bias, political polarization, and politically motivated reasoning that now prevail in the public sphere and offer an actual example of how clearly stated scientific conclusions can be systematically perverted in the media through an internet-based campaign of disinformation and misinformation. We suggest that, in addition to attending to the clarity of their communications, scientists must also develop online strategies to counteract campaigns of misinformation and disinformation that will inevitably follow the release of findings threatening to partisans on either end of the political spectrum.

(cut)

At this point, probably the best that can be done is for scientists and their scientific associations to anticipate campaigns of misinformation and disinformation and to proactively develop online strategies and internet platforms to counteract them when they occur. For example, the National Academies of Science, Engineering, and Medicine could form a consortium of professional scientific organizations to fund the creation of a media and internet operation that monitors networks, channels, and web platforms known to spread false and misleading scientific information so as to be able to respond quickly with a countervailing campaign of rebuttal based on accurate information through Facebook, Twitter, and other forms of social media.

Sunday, December 13, 2020

Polarization and extremism emerge from rational choice

Kvam, P. D., & Baldwin, M. 
(2020, October 21).

Abstract

Polarization is often thought to be the product of biased information search, motivated reasoning, or other psychological biases. However, polarization and extremism can still occur in the absence of any bias or irrational thinking. In this paper, we show that polarization occurs among groups of decision makers who are implementing rational choice strategies that maximize decision efficiency. This occurs because extreme information enables decision makers to make up their minds and stop considering new information, whereas moderate information is unlikely to trigger a decision. Furthermore, groups of decision makers will generate extremists -- individuals who hold strong views despite being uninformed and impulsive. In re-analyses of seven previous empirical studies on both perceptual and preferential choice, we show that both polarization and extremism manifest across a wide variety of choice paradigms. We conclude by offering theoretically-motivated interventions that could reduce polarization and extremism by altering the incentives people have when gathering information.

Conclusions

In a decision scenario that incentivizes a trade-off between time and decision quality, a population of rational decision makers will become polarized. In this paper, we have shown this through simulations, a mathematical proof (supplementary materials), and demonstrated it empirically in seven studies. This leads us to an unfortunate but unavoidable conclusion that decision making is a bias-inducing process by which participants gather representative information from their environment and, through the decision rules they implement, distort it toward the extremes. Such a process also generates extremists, who hold extreme views and carry undue influence over cultural discourse (Navarro et al., 2018) despite being relatively uninformed and impulsive (low thresholds; Kim & Lee, 2011). We have suggested several avenues for interventions, foremost among them providing incentives favoring estimation or judgments as opposed to incentives for timely decision making. Our hope is that future work testing and implementing these interventions will reduce the prevalence of polarization and extremism across social domains currently occupied by decision makers.
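
A toy simulation of my own (not the authors' model) shows how a rational stopping rule can split a population: every agent samples the same mildly positive, noisy evidence and commits as soon as the running total crosses a threshold, so almost everyone ends up at one extreme or the other rather than near the evidence itself. All parameter values are illustrative assumptions.

```python
# Toy sketch (my own, not the authors' model): agents accumulate noisy evidence
# with a small positive drift and commit as soon as |total| crosses a threshold,
# so the views people stop with pile up at the extremes. All values are assumptions.

import random

random.seed(1)

def decide(threshold: float = 3.0, drift: float = 0.1,
           noise: float = 1.0, max_samples: int = 500) -> float:
    """Accumulate evidence until |total| >= threshold; return the final total."""
    total = 0.0
    for _ in range(max_samples):
        total += random.gauss(drift, noise)
        if abs(total) >= threshold:
            break
    return total

decisions = [decide() for _ in range(5_000)]

pro = sum(d >= 3.0 for d in decisions)    # committed to the "positive" extreme
con = sum(d <= -3.0 for d in decisions)   # committed to the opposite extreme
moderate = len(decisions) - pro - con     # never crossed either threshold

print(f"pro extreme: {pro}  con extreme: {con}  still moderate: {moderate}")
```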

Tuesday, November 17, 2020

Violent CREDs toward Out-Groups Increase Trustworthiness: Preliminary Experimental Evidence

Řezníček, D., & Kundt, R. (2020).
Journal of Cognition and Culture, 20(3-4), 262-281. 
doi: https://doi.org/10.1163/15685373-12340084

Abstract

In the process of cultural learning, people tend to acquire mental representations and behavior from prestigious individuals over dominant ones, as prestigious individuals generously share their expertise and know-how to gain admiration, whereas dominant ones use violence, manipulation, and intimidation to enforce obedience. However, in the context of intergroup conflict, violent thoughts and behavior that are otherwise associated with dominance can hypothetically become prestigious because parochial altruists, who engage in violence against out-groups, act in the interest of their group members, therefore prosocially. This shift would imply that for other in-groups, individuals behaving violently toward out-groups during intergroup conflicts become simultaneously prestigious, making them desirable cultural models to learn from. Using the mechanism of credibility enhancing displays (CREDs), this article presents preliminary vignette-based evidence that violent CREDs toward out-groups during intergroup conflict increase the perceived trustworthiness of a violent cultural model.

From the Discussion section

We found support for hypotheses H1–3 regarding the seemingly paradoxical relationship between trustworthiness, prestige, dominance, and violence during an intergroup conflict (see Figures 1 and 2). Violent cultural model’s trustworthiness was positively predicted by CREDs and prestige, while it was negatively predicted by dominance. This suggests that in-groups violent toward out-groups during an intergroup conflict are not perceived as dominant manipulators who are better to be avoided and not learned from but rather as prestigious heroes who deserve to be venerated. Thus, it appears that a positive perception of violence toward out-groups, as modeled or tested by various researchers (Bowles, 2008; Castano & Leidner, 2012; Choi & Bowles, 2007; Cohen, Montoya, & Insko, 2006; Roccas, Klar, & Liviatan, 2006), is an eligible notion. Our study offers preliminary evidence for the suggestion that fighting violently for one’s group may increase the social status of fighters via prestige, not dominance.

Tuesday, September 22, 2020

How to be an ethical scientist

W. A. Cunningham, J. J. Van Bavel,
& L. H. Somerville
Science Magazine
Originally posted 5 August 20

True discovery takes time, has many stops and starts, and is rarely neat and tidy. For example, news that the Higgs boson was finally observed in 2012 came 48 years after its original proposal by Peter Higgs. The slow pace of science helps ensure that research is done correctly, but it can come into conflict with the incentive structure of academic progress, as publications—the key marker of productivity in many disciplines—depend on research findings. Even Higgs recognized this problem with the modern academic system: “Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough.”

It’s easy to forget about the “long view” when there is constant pressure to produce. So, in this column, we’re going to focus on the type of long-term thinking that advances science. For example, are you going to cut corners to get ahead, or take a slow, methodical approach? What will you do if your experiment doesn’t turn out as expected? Without reflecting on these deeper issues, we can get sucked into the daily goals necessary for success while failing to see the long-term implications of our actions.

Thinking carefully about these issues will not only impact your own career outcomes, but it can also impact others. Your own decisions and actions affect those around you, including your labmates, your collaborators, and your academic advisers. Our goal is to help you avoid pitfalls and find an approach that will allow you to succeed without impairing the broader goals of science.

Be open to being wrong

Science often advances through accidental (but replicable) findings. The logic is simple: If studies always came out exactly as you anticipated, then nothing new would ever be learned. Our previous theories of the world would be just as good as they ever were. This is why scientific discovery is often most profound when you stumble on something entirely new. Isaac Asimov put it best when he said, “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny ... .’”

The info is here.

Monday, March 2, 2020

The Dunning-Kruger effect, or why the ignorant think they’re experts

Alexandru Micu
zmescience.com
Originally posted 13 Feb 20

Here is an excerpt:

It’s not specific only to technical skills but plagues all walks of human existence equally. One study found that 80% of drivers rate themselves as above average, which is literally impossible because that’s not how averages work. We tend to gauge our own relative popularity the same way.

It isn’t limited to people with low or nonexistent skills in a certain matter, either — it works on pretty much all of us. In their first study, Dunning and Kruger also found that students who scored in the top quartile (25%) routinely underestimated their own competence.

A fuller definition of the Dunning-Kruger effect would be that it represents a bias in estimating our own ability that stems from our limited perspective. When we have a poor or nonexistent grasp on a topic, we literally know too little of it to understand how little we know. Those who do possess the knowledge or skills, however, have a much better idea of where they sit. But they also think that if a task is clear and simple to them, it must be so for everyone else as well.

A person in the first group and one in the second group are equally liable to use their own experience and background as the baseline and kinda just take it for granted that everyone is near that baseline. They both partake in the “illusion of confidence” — for one, that confidence is in themselves, for the other, in everyone else.

The info is here.

Monday, January 27, 2020

The Character of Causation: Investigating the Impact of Character, Knowledge, and Desire on Causal Attributions

Justin Sytsma
(2019) Preprint

Abstract

There is a growing consensus that norms matter for ordinary causal attributions. This has important implications for philosophical debates over actual causation. Many hold that theories of actual causation should coincide with ordinary causal attributions, yet those attributions often diverge from the theories when norms are involved. There remains substantive debate about why norms matter for causal attributions, however. In this paper, I consider two competing explanations—Alicke’s bias view, which holds that the impact of norms reflects systematic error (suggesting that ordinary causal attributions should be ignored in the philosophical debates), and our responsibility view, which holds that the impact of norms reflects the appropriate application of the ordinary concept of causation (suggesting that philosophical accounts are not analyzing the ordinary concept). I investigate one key difference between these views: the bias view, but not the responsibility view, predicts that “peripheral features” of the agents in causal scenarios—features that are irrelevant to appropriately assessing responsibility for an outcome, such as general character—will also impact ordinary causal attributions. These competing predictions are tested for two different types of scenarios. I find that information about an agent’s character does not impact causal attributions on its own. Rather, when character shows an effect it works through inferences to relevant features of the agent. In one scenario this involves inferences to the agent’s knowledge of the likely result of her action and her desire to bring about that result, with information about knowledge and desire each showing an independent effect on causal attributions.

From the Conclusion:

Alicke’s bias view holds that not only do features of the agent’s mental states matter, such as her knowledge and desires concerning the norm and the outcome, but also peripheral features of the agent whose impact could only reasonably be explained in terms of bias. In contrast, our responsibility view holds that the impact of norms does not reflect bias, but rather that ordinary causal attributions issue from the appropriate application of a concept with a normative component. As such, we predict that while judgments about the agent’s mental states that are relevant to adjudicating responsibility will matter, peripheral features of the agent will only matter insofar as they warrant an inference to other features of the agent that are relevant.

 In line with the responsibility view and against the bias view, the results of the studies presented in this paper suggest that information relevant to assessing an agent’s character matters but only when it warrants an inference to a non-peripheral feature, such as the agent’s negligence in the situation or her knowledge and desire with regard to the outcome. Further, the results indicate that information about an agent’s knowledge and desire both impact ordinary causal attributions in the scenario tested. This raises an important methodological issue for empirical work on ordinary causal attributions: researchers need to carefully consider and control for the inferences that participants might draw concerning the agents’ mental states and motivations.

The research is here.

Friday, July 12, 2019

Tribalism is Human Nature

Cory Clark, Brittany Liu, Bo Winegard, and Peter Ditto
Pre-print

Abstract

Humans evolved in the context of intense intergroup competition, and groups comprised of loyal members more often succeeded than those that were not. Therefore, selective pressures have consistently sculpted human minds to be "tribal," and group loyalty and concomitant cognitive biases likely exist in all groups. Modern politics is one of the most salient forms of modern coalitional conflict and elicits substantial cognitive biases. Given the common evolutionary history of liberals and conservatives, there is little reason to expect pro-tribe biases to be higher on one side of the political spectrum than the other. We call this the evolutionarily plausible null hypothesis and recent research has supported it. In a recent meta-analysis, liberals and conservatives showed similar levels of partisan bias, and a number of pro-tribe cognitive tendencies often ascribed to conservatives (e.g., intolerance toward dissimilar others) have been found in similar degrees in liberals. We conclude that tribal bias is a natural and nearly ineradicable feature of human cognition, and that no group—not even one’s own—is immune.

Here is part of the Conclusion:

Humans are tribal creatures. They were not designed to reason dispassionately about the world; rather, they were designed to reason in ways that promote the interests of their coalition (and hence, themselves). It would therefore be surprising if a particular group of individuals did not display such tendencies, and recent work suggests, at least in the U.S. political sphere, that both liberals and conservatives are substantially biased—and to similar degrees. Historically, and perhaps even in modern society, these tribal biases are quite useful for group cohesion but perhaps also for other moral purposes (e.g., liberal bias in favor of disadvantaged groups might help increase equality). Also, it is worth noting that a bias toward viewing one’s own tribe in a favorable light is not necessarily irrational. If one’s goal is to be admired among one’s own tribe, fervidly supporting their agenda and promoting their goals, even if that means having or promoting erroneous beliefs, is often a reasonable strategy (Kahan et al., 2017). The incentives for holding an accurate opinion about global climate change, for example, may not be worth the social rejection and loss of status that could accompany challenging the views of one’s political ingroup. However, these biases decrease the likelihood of consensus across political divides. Thus, developing effective strategies for disincentivizing political tribalism and promoting the much less natural but more salutary tendencies toward civil political discourse and reasonable compromise are crucial priorities for future research. A useful theoretical starting point is that tribalism and concomitant biases are part of human nature, and that no group, not even one’s own, is immune.

A pre-print is here.

Tuesday, March 26, 2019

Does AI Ethics Have a Bad Name?

Calum Chace
Forbes.com
Originally posted March 7, 2019

Here is an excerpt:

Artificial intelligence is a technology, and a very powerful one, like nuclear fission.  It will become increasingly pervasive, like electricity. Some say that its arrival may even turn out to be as significant as the discovery of fire.  Like nuclear fission, electricity and fire, AI can have positive impacts and negative impacts, and given how powerful it is and it will become, it is vital that we figure out how to promote the positive outcomes and avoid the negative ones.

It's the bias that concerns people in the AI ethics community.  They want to minimise the amount of bias in the data which informs the AI systems that help us to make decisions – and ideally, to eliminate the bias altogether.  They want to ensure that tech giants and governments respect our privacy at the same time as they develop and deliver compelling products and services. They want the people who deploy AI to make their systems as transparent as possible so that in advance or in retrospect, we can check for sources of bias and other forms of harm.

But if AI is a technology like fire or electricity, why is the field called “AI ethics”?  We don’t have “fire ethics” or “electricity ethics,” so why should we have AI ethics?  There may be a terminological confusion here, and it could have negative consequences.

One possible downside is that people outside the field may get the impression that some sort of moral agency is being attributed to the AI, rather than to the humans who develop AI systems.  The AI we have today is narrow AI: superhuman in certain narrow domains, like playing chess and Go, but useless at anything else. It makes no more sense to attribute moral agency to these systems than it does to a car or a rock.  It will probably be many years before we create an AI which can reasonably be described as a moral agent.

The info is here.

Tuesday, February 26, 2019

Strengthening Our Science: AGU Launches Ethics and Equity Center

Robyn Bell
EOS.org
Originally published February 14, 2019

In the next century, our species will face a multitude of challenges. A diverse and inclusive community of researchers ready to lead the way is essential to solving these global-scale challenges. While Earth and space science has made many positive contributions to society over the past century, our community has suffered from a lack of diversity and a culture that tolerates unacceptable and divisive conduct. Bias, harassment, and discrimination create a hostile work climate, undermining the entire global scientific enterprise and its ability to benefit humanity.

As we considered how our Centennial can launch the next century of amazing Earth and space science, we focused on working with our community to build diverse, inclusive, and ethical workplaces where all participants are encouraged to develop their full potential. That’s why I’m so proud to announce the launch of the AGU Ethics and Equity Center, a new hub for comprehensive resources and tools designed to support our community across a range of topics linked to ethics and workplace excellence. The Center will provide resources to individual researchers, students, department heads, and institutional leaders. These resources are designed to help share and promote leading practices on issues ranging from building inclusive environments, to scientific publications and data management, to combating harassment, to example codes of conduct. AGU plans to transform our culture in scientific institutions so we can achieve inclusive excellence.

The info is here.

Monday, December 17, 2018

Am I a Hypocrite? A Philosophical Self-Assessment

John Danaher
Philosophical Disquisitions
Originally published November 9, 2018

Here are two excerpts:

The common view among philosophers is that hypocrisy is a moral failing. Indeed, it is often viewed as one of the worst moral failings. Why is this? Christine McKinnon’s article ‘Hypocrisy, with a Note on Integrity’ provides a good, clear defence of this view. The article itself is a classic exercise in analytical philosophical psychology. It tries to clarify the structure of hypocrisy and explain why we should take it so seriously. It does so by arguing that there are certain behaviours, desires and dispositions that are the hallmark of the hypocrite and that these behaviours, desires and dispositions undermine our system of social norms.

McKinnon makes this case by considering some paradigmatic instances of hypocrisy, and identifying the necessary and sufficient conditions that allow us to label these as instances of hypocrisy. My opening example of my email behaviour probably fits this paradigmatic mode — despite my protestations to the contrary. A better example, however, might be religious hypocrisy. There have been many well-documented historical cases of this, but let’s not focus on these. Let’s instead imagine a case that closely parallels these historical examples. Suppose there is a devout fundamentalist Christian preacher. He regularly preaches about the evils of homosexuality and secularism and professes to be heterosexual and devout. He calls upon parents to disown their homosexual children or to subject them to ‘conversion therapy’. Then, one day, this preacher is discovered to himself be a homosexual. Not just that, it turns out he has a long-term male partner that he has kept hidden from the public for over 20 years, and that they were recently married in a non-religious humanist ceremony.

(cut)

In other words, what I refer to as my own hypocrisy seems to involve a good deal of self-deception and self-manipulation, not (just) the manipulation of others. That’s why I was relieved to read Michael Statman’s article on ‘Hypocrisy and Self-Deception’. Statman wants to get away from the idea of the hypocrite as moral cartoon character. Real people are way more interesting than that. As he sees it, the morally vicious form of hypocrisy that is the focus of McKinnon’s ire tends to overlap with and blur into self-deception much more frequently than she allows. The two things are not strongly dichotomous. Indeed, people can slide back and forth between them with relative ease: the self-deceived can slide into hypocrisy and the hypocrite can slide into self-deception.

Although I am attracted to this view, Statman points out that it is a tough sell. 

Thursday, November 15, 2018

Expectations Bias Moral Evaluations

Derek Powell & Zachary Horne
PsyArXiv
Originally posted September 13, 2018

Abstract

People’s expectations play an important role in their reactions to events. There is often disappointment when events fail to meet expectations and a special thrill to having one’s expectations exceeded. We propose that expectations influence evaluations through information-theoretic principles: less expected events do more to inform us about the state of the world than do more expected events. An implication of this proposal is that people may have inappropriately muted responses to morally significant but expected events. In two preregistered experiments, we found that people’s judgments of morally-significant events were affected by the likelihood of that event. People were more upset about events that were unexpected (e.g., a robbery at a clothing store) than events that were more expected (e.g., a robbery at a convenience store). We argue that this bias has pernicious moral consequences, including leading to reduced concern for victims in most need of help.
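
The information-theoretic intuition the authors invoke can be stated in one line: an event's surprisal is -log2(p), so less probable events carry more information. The probabilities below are made-up numbers just to show the arithmetic.

```python
# One-line version of the surprisal idea: information = -log2(p), so less expected
# events are more informative. The probabilities are made-up illustrative numbers.

from math import log2

for label, p in [("robbery at a convenience store (more expected)", 0.02),
                 ("robbery at a clothing store (less expected)", 0.002)]:
    print(f"{label}: {-log2(p):.1f} bits of surprise")
```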

The research/preprint is here.