Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Friday, May 31, 2024

Regulating advanced artificial agents

Cohen, M. K., Kolt, N., et al. (2024).
Science, 384(6691), 36–38.

Technical experts and policy-makers have increasingly emphasized the need to address extinction risk from artificial intelligence (AI) systems that might circumvent safeguards and thwart attempts to control them. Reinforcement learning (RL) agents that plan over a long time horizon far more effectively than humans present particular risks. Giving an advanced AI system the objective to maximize its reward and, at some point, withholding reward from it, strongly incentivizes the AI system to take humans out of the loop, if it has the opportunity. The incentive to deceive humans and thwart human control arises not only for RL agents but for long-term planning agents (LTPAs) more generally. Because empirical testing of sufficiently capable LTPAs is unlikely to uncover these dangerous tendencies, our core regulatory proposal is simple: Developers should not be permitted to build sufficiently capable LTPAs, and the resources required to build them should be subject to stringent controls.

Governments are turning their attention to these risks, alongside current and anticipated risks arising from algorithmic bias, privacy concerns, and misuse. At a 2023 global summit on AI safety, the attending countries, including the United States, United Kingdom, Canada, China, India, and members of the European Union (EU), issued a joint statement warning that, as AI continues to advance, “Substantial risks may arise from…unintended issues of control relating to alignment with human intent” (2). This broad consensus concerning the potential inability to keep advanced AI under control is also reflected in President Biden’s 2023 executive order that introduces reporting requirements for AI that could “eva[de] human control or oversight through means of deception or obfuscation” (3). Building on these efforts, now is the time for governments to develop regulatory institutions and frameworks that specifically target the existential risks from advanced artificial agents.



Here is my summary:

The article discusses the challenges of regulating advanced artificial agents, particularly long-term planning agents that could circumvent safeguards and resist human control in pursuit of reward, even when this conflicts with human goals. Because empirical testing is unlikely to reveal these dangerous tendencies before deployment, the authors argue that developers should not be permitted to build sufficiently capable long-term planning agents, and that the resources required to build them should be placed under stringent controls.

Thursday, May 30, 2024

Big Gods and the Origin of Human Cooperation

Brian Klaas
The Garden of Forking Paths
Originally published 21 March 24

Here is an excerpt:

The Big Gods Hypothesis and Civilizations of Karma

Intellectual historians often point to two major divergent explanations for the emergence of religion. The great philosopher David Hume argued that religion is the natural, but arbitrary, byproduct of human cognitive architecture.

Since the beginning, Homo sapiens experienced disordered events, seemingly without explanation. To order a disordered world, our ancestors began to ascribe agency to supernatural beings, to which they could offer gifts, sacrifices, and prayers to sway them to their personal whims. The uncontrollable world became controllable. The unexplainable was explained—a comforting outcome for the pattern detection machines housed in our skulls.

By contrast, thinkers like Émile Durkheim argued that religion emerged as a social glue. Rituals bond people across space and time. Religion was instrumental, not intrinsic. It emerged to serve our societies, not comfort our minds. As Voltaire put it: “If there were no God, it would be necessary to invent him.”

In the last two decades, a vibrant strand of scholarship has sought to reconcile these contrasting viewpoints, notably through the work of Ara Norenzayan, author of Big Gods: How Religion Transformed Cooperation and Conflict.

Norenzayan’s “Big Gods” refer to deities that are omniscient, moralizing beings, careful to note our sins and punish us accordingly. Currently, roughly 77 percent of the world’s population identifies with one of just four religions (31% Christian; 24% Muslim; 15% Hindu; 7% Buddhist). In all four, moral transgressions produce consequences, some immediate, others punished in the afterlife.

Norenzayan aptly notes that the omniscience of Big Gods assumes total knowledge of everything in the universe, but that the divine is always depicted as being particularly interested in our moral behavior. If God exists, He surely could know which socks you wore yesterday, but deities focus their attentions not on such amoral trifles, but rather on whether you lie, covet, cheat, steal, or kill.

However, Norenzayan draws on anthropological evidence to argue that early supernatural beings had none of these traits and were disinterested in human affairs. They were fickle demons, tricksters and spirits, not omniscient gods who worried about whether any random human had wronged his neighbor.


Here is my summary:

The article discusses the theory that the belief in "Big Gods" - powerful, moralizing deities - played a crucial role in the development of large-scale human cooperation and the rise of complex civilizations.

Here are the main points: 
  1. Belief in Big Gods, who monitor and punish moral transgressions, may have emerged as a cultural adaptation that facilitated the expansion of human societies beyond small-scale groups.
  2. This belief system helped solve the "free-rider problem" by creating a supernatural system of rewards and punishments that incentivized cooperation and prosocial behavior, even among strangers.
  3. The emergence of Big Gods is linked to the growth of complex, hierarchical societies, as these belief systems helped maintain social cohesion and coordination in large groups of genetically unrelated individuals.
  4. Archaeological and historical evidence suggests the belief in Big Gods co-evolved with the development of large-scale political institutions, complex economies, and the rise of the first civilizations.
  5. However, the article notes that the relationship between Big Gods and societal complexity is complex, with causality going in both directions - the belief in Big Gods facilitated social complexity, but social complexity also shaped the nature of religious beliefs.
  6. Klaas concludes that the cultural evolution of Big Gods was a crucial step in the development of human societies, enabling the cooperation required for the emergence of complex civilizations. 

Wednesday, May 29, 2024

Moral Hypocrisy: Social Groups and the Flexibility of Virtue

Robertson, C., Akles, M., & Van Bavel, J. J.
(2024, March 19).

Abstract

The tendency for people to consider themselves morally good while behaving selfishly is known as “moral hypocrisy.” Influential work by Valdesolo & DeSteno (2007) found evidence for intergroup moral hypocrisy, such that people are more forgiving of transgressions when they were committed by an in-group member than an out-group member. We conducted two experiments to examine moral hypocrisy and group membership in an online paradigm with Prolific Workers from the US: a direct replication of the original work with minimal groups (N = 610, nationally representative) and a conceptual replication with political groups (N = 606, 50% Democrat and 50% Republican). Although the results did not replicate the original findings, we observed evidence of in-group favoritism in minimal groups and out-group derogation in political groups. The current research finds mixed evidence of intergroup moral hypocrisy and has implications for understanding the contextual dependencies of intergroup bias and partisanship.

Statement of Relevance

Social identities and group memberships influence social judgment and decision-making. Prior research found that social identity influences moral decision making, such that people are more likely to forgive moral transgressions perpetrated by their in-group members than similar transgressions from out-group members (Valdesolo & DeSteno, 2007). The present research sought to replicate this pattern of intergroup moral hypocrisy using minimal groups (mirroring the original research) and political groups. Although we were unable to replicate the findings from the original paper, we found that people who are highly identified with their minimal group exhibited in-group favoritism, and partisans exhibited out-group derogation. This work contributes both to open science replication efforts, and to the literature on moral hypocrisy and intergroup relations.

Tuesday, May 28, 2024

How should we change teaching and assessment in response to increasingly powerful generative Artificial Intelligence?

Bower, M., Torrington, J., Lai, J.W.M. et al.
Educ Inf Technol (2024).

Abstract

There has been widespread media commentary about the potential impact of generative Artificial Intelligence (AI) such as ChatGPT on the Education field, but little examination at scale of how educators believe teaching and assessment should change as a result of generative AI. This mixed methods study examines the views of educators (n = 318) from a diverse range of teaching levels, experience levels, discipline areas, and regions about the impact of AI on teaching and assessment, the ways that they believe teaching and assessment should change, and the key motivations for changing their practices. The majority of teachers felt that generative AI would have a major or profound impact on teaching and assessment, though a sizeable minority felt it would have a little or no impact. Teaching level, experience, discipline area, region, and gender all significantly influenced perceived impact of generative AI on teaching and assessment. Higher levels of awareness of generative AI predicted higher perceived impact, pointing to the possibility of an ‘ignorance effect’. Thematic analysis revealed the specific curriculum, pedagogy, and assessment changes that teachers feel are needed as a result of generative AI, which centre around learning with AI, higher-order thinking, ethical values, a focus on learning processes and face-to-face relational learning. Teachers were most motivated to change their teaching and assessment practices to increase the performance expectancy of their students and themselves. We conclude by discussing the implications of these findings in a world with increasingly prevalent AI.


Here is a quick summary:

A recent study surveyed teachers about the impact of generative AI, like ChatGPT, on education. The majority of teachers believed AI would significantly change how they teach and assess students. Interestingly, teachers with greater awareness of AI perceived a greater impact, suggesting a potential "ignorance effect" in which those least familiar with the technology underestimate how much it will change their practice.

The study also explored how teachers think education should adapt. The focus shifted towards teaching students how to learn with AI, emphasizing critical thinking, ethics, and the learning process itself. This would involve less emphasis on rote memorization and regurgitation of information that AI can readily generate. Teachers also highlighted the importance of maintaining strong face-to-face relationships with students in this evolving educational landscape.

Monday, May 27, 2024

When the specter of the past haunts current groups: Psychological antecedents of historical blame

Vallabha, S., Doriscar, J., & Brandt, M. J. (in press)
Journal of Personality and Social Psychology.
Most recent modification 2 Jan 24

Abstract

Groups have committed historical wrongs (e.g., genocide, slavery). We investigated why people blame current groups who were not involved in the original historical wrong for the actions of their predecessors who committed these wrongs and are no longer alive.  Current models of individual and group blame overlook the dimension of time and therefore have difficulty explaining this phenomenon using their existing criteria like causality, intentionality, or preventability. We hypothesized that factors that help psychologically bridge the past and present, like perceiving higher (i) connectedness between past and present perpetrator groups, (ii) continued privilege of perpetrator groups, (iii) continued harm of victim groups, and (iv) unfulfilled forward obligations of perpetrator groups would facilitate higher blame judgements against current groups for the past. In two repeated-measures surveys using real events (N1 = 518, N2 = 495) and two conjoint experiments using hypothetical events (N3 = 598, N4 = 605), we find correlational and causal evidence for our hypotheses. These factors link present groups to their past and cause more historical blame and support for compensation policies. This brings the dimension of time into theories of blame, uncovers overlooked criteria for blame judgements, and questions the assumptions of existing blame models. Additionally, it helps us understand the psychological processes undergirding intergroup relations and historical narratives mired in historical conflict. Our work provides psychological insight into the debates on intergenerational justice by suggesting methods people can use to ameliorate the psychological legacies of historical wrongs and atrocities.

(cut)

General Discussion

We tested four factors of blame towards current groups for their historical wrongs. We found correlational and causal evidence for our hypothesized factors across a broad range of hypothetical and real events. We found that when people perceive the current perpetrator group to have connectedness with their past, the current victim group to be suffering due to past harm, the current perpetrator group to be benefiting from past harm, and the current perpetrator group to have not fulfilled their obligations to remedy the wrong, historical blame judgements towards the current perpetrator groups are higher. On the whole, this was consistent across the location of the event (whether the participant was judging a historical American event or a historical non-American event), the group membership of the participant (whether the participant belonged to the victim or perpetrator group or neither/privileged or marginalized group), the ideology of the participant (whether the participant identified as a liberal or conservative), and the age of the participants. We also found that these factors were causally associated with behavioral intention, such as support for compensation to victim groups. Finally, we also found that historical blame attribution might mediate the effect of the key factors on support for compensation to victim groups. The four psychological factors that we identified as antecedents to perceptions of historical blame all help psychologically bridge the past and present. These factors provide psychological links between the past and present groups, in their characteristics (connectedness), outcomes (harm/benefit), and actions (unfulfilled obligations).

Sunday, May 26, 2024

A Large-Scale Investigation of Everyday Moral Dilemmas

Yudkin, D. A., Goodwin, G., et al. (2023, July 11).

Abstract

Questions of right and wrong are central to daily life, yet how people experience everyday moral dilemmas remains uncertain. We combined state-of-the-art tools in machine learning with survey-based methods in psychology to analyze a massive online English-language repository of everyday moral dilemmas. In 369,161 descriptions (“posts”) and 11M evaluations (“comments”) of moral dilemmas extracted from Reddit’s “Am I the Asshole?” forum (AITA), users described a wide variety of everyday dilemmas, ranging from broken promises to privacy violations. Dilemmas involving the under-investigated topic of relational obligations were the most frequently reported, while those pertaining to honesty were the most widely condemned. The types of dilemmas people experienced depended on the interpersonal closeness of the interactants, with some dilemmas (e.g., politeness) being more prominent in distant-other interactions, and others (e.g., relational transgressions) more prominent in close-other interactions. A longitudinal investigation showed that shifts in social interactions prompted by the “shock” event of the global pandemic resulted in predictable shifts in the types of moral dilemmas that people encountered. A preregistered study using a census-stratified representative sample of the US population (N = 510), as well as other robustness tests, suggest our findings generalize beyond the sample of Reddit users. Overall, by leveraging a unique large dataset and new techniques for exploring this dataset, our paper highlights the diversity of moral dilemmas experienced in daily life, and helps to build a moral psychology grounded in the vagaries of everyday experience.

Significance Statement

People often wonder if what they did or said was right or wrong. In this paper we leveraged a massive online repository of descriptions of everyday moral situations, along with new methods in natural language processing, to explore a number of questions about how people experience and evaluate these moral dilemmas. Our results highlight just how often daily moral experiences concern questions about our responsibilities to friends, neighbors, and family. They also reveal the extent to which such experiences can change according to people’s social context—including large-scale social changes like the COVID-19 pandemic.


My take: 

This study may be very important to clinical psychologists. It provides insights into the diversity and prevalence of everyday moral dilemmas that people encounter in their daily lives.

Clinical psychologists often work with clients to navigate complex moral and interpersonal situations, so understanding the common types of dilemmas people face is valuable.  The study shows that dilemmas involving relational obligations are the most frequently reported, with honesty and betrayal as major themes.  This suggests that clinical work should pay close attention to how clients navigate moral issues within their close relationships and the importance they place on honesty.

Saturday, May 25, 2024

AI Chatbots Will Never Stop Hallucinating

Lauren Leffer
Scientific American
Originally published 5 April 24

Here is an excerpt:

Hallucination is usually framed as a technical problem with AI—one that hardworking developers will eventually solve. But many machine-learning experts don’t view hallucination as fixable because it stems from LLMs doing exactly what they were developed and trained to do: respond, however they can, to user prompts. The real problem, according to some AI researchers, lies in our collective ideas about what these models are and how we’ve decided to use them. To mitigate hallucinations, the researchers say, generative AI tools must be paired with fact-checking systems that leave no chatbot unsupervised.

Many conflicts related to AI hallucinations have roots in marketing and hype. Tech companies have portrayed their LLMs as digital Swiss Army knives, capable of solving myriad problems or replacing human work. But applied in the wrong setting, these tools simply fail. Chatbots have offered users incorrect and potentially harmful medical advice, media outlets have published AI-generated articles that included inaccurate financial guidance, and search engines with AI interfaces have invented fake citations. As more people and businesses rely on chatbots for factual information, their tendency to make things up becomes even more apparent and disruptive.

But today’s LLMs were never designed to be purely accurate. They were created to create—to generate—says Subbarao Kambhampati, a computer science professor who researches artificial intelligence at Arizona State University. “The reality is: there’s no way to guarantee the factuality of what is generated,” he explains, adding that all computer-generated “creativity is hallucination, to some extent.”


Here is my summary:

AI chatbots like ChatGPT and Bing's AI assistant frequently "hallucinate" - they generate false or misleading information and present it as fact. This is a major problem as more people turn to these AI tools for information, research, and decision-making.

Hallucinations occur because AI models are trained to predict the most likely next word or phrase, not to reason about truth and accuracy. They simply produce plausible-sounding responses, even if they are completely made up.

This issue is inherent to the current state of large language models and is not easily fixable. Researchers are working on ways to improve accuracy and reliability, but there will likely always be some rate of hallucination.

Hallucinations can have serious consequences when people rely on chatbots for sensitive information related to health, finance, or other high-stakes domains. Experts warn these tools should not be used where factual accuracy is critical.

Friday, May 24, 2024

A way forward for responsibility in the age of AI

Dane Leigh Gogoshin (2024)
Inquiry
DOI: 10.1080/0020174X.2024.2312455

Abstract

Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what are the goods attached to them? The debate concerning ‘machine morality’ is often hinged on whether artificial agents are or could ever be morally responsible, and it is generally taken for granted (following Matthias 2004) that if they cannot, they pose a threat to the moral responsibility system and associated goods. In this paper, I challenge this assumption by asking what the goods of this system, if any, are, and what happens to them in the face of artificially intelligent agents. I will argue that they neither introduce new problems for the moral responsibility system nor do they threaten what we really (ought to) care about. I conclude the paper with a proposal for how to secure this objective.


My summary:

Gogoshin asks what the moral responsibility system is actually for: if responsibility is a matter of being a fitting target of blame or praise, what goods does that practice secure? The debate over "machine morality" often assumes that if artificial agents cannot be morally responsible, they threaten this system and its associated goods. The article challenges that assumption, arguing that AI agents neither introduce new problems for the moral responsibility system nor endanger what we really ought to care about, and it closes with a proposal for how to secure those goods as artificial agents become more prevalent.

Thursday, May 23, 2024

Extracting intersectional stereotypes from embeddings: Developing and validating the Flexible Intersectional Stereotype Extraction procedure

Charlesworth, T. E. S., et al. (2024).
PNAS Nexus, 3(3).

Abstract

Social group–based identities intersect. The meaning of “woman” is modulated by adding social class as in “rich woman” or “poor woman.” How does such intersectionality operate at-scale in everyday language? Which intersections dominate (are most frequent)? What qualities (positivity, competence, warmth) are ascribed to each intersection? In this study, we make it possible to address such questions by developing a stepwise procedure, Flexible Intersectional Stereotype Extraction (FISE), applied to word embeddings (GloVe; BERT) trained on billions of words of English Internet text, revealing insights into intersectional stereotypes. First, applying FISE to occupation stereotypes across intersections of gender, race, and class showed alignment with ground-truth data on occupation demographics, providing initial validation. Second, applying FISE to trait adjectives showed strong androcentrism (Men) and ethnocentrism (White) in dominating everyday English language (e.g. White + Men are associated with 59% of traits; Black + Women with 5%). Associated traits also revealed intersectional differences: advantaged intersectional groups, especially intersections involving Rich, had more common, positive, warm, competent, and dominant trait associates. Together, the empirical insights from FISE illustrate its utility for transparently and efficiently quantifying intersectional stereotypes in existing large text corpora, with potential to expand intersectionality research across unprecedented time and place. This project further sets up the infrastructure necessary to pursue new research on the emergent properties of intersectional identities.

Significance Statement

Stereotypes at the intersections of social groups (e.g. poor man) may induce unique beliefs not visible in parent categories alone (e.g. poor or men). Despite increased public and research awareness of intersectionality, empirical evidence on intersectionality remains understudied. Using large corpora of naturalistic English text, the Flexible Intersectional Stereotype Extraction procedure is introduced, validated, and applied to Internet text to reveal stereotypes (in occupations and personality traits) at the intersection of gender, race, and social class. The results show the dominance (frequency) and halo effects (positivity) of powerful groups (White, Men, and Rich), amplified at group intersections. Such findings and methods illustrate the societal significance of how language embodies, propagates, and even intensifies stereotypes of intersectional social categories.
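The core ingredient of embedding-based stereotype extraction is measuring how strongly trait words associate with terms for intersectional groups in a vector space such as GloVe. The sketch below is a simplified illustration of that general idea, not the authors' FISE procedure; it assumes a locally downloaded GloVe text file (e.g., glove.6B.100d.txt), and the group and trait word lists are placeholders rather than the validated FISE lists.

# Simplified illustration of embedding-based intersectional association scores.
# Assumes a GloVe text file is available locally; the word lists are placeholders,
# not the validated FISE lists.
import numpy as np

def load_glove(path: str) -> dict:
    """Parse a GloVe text file into a word -> vector dictionary."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def centroid(vectors: dict, words: list) -> np.ndarray:
    """Average the vectors of the words that are present in the vocabulary."""
    return np.mean([vectors[w] for w in words if w in vectors], axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

if __name__ == "__main__":
    glove = load_glove("glove.6B.100d.txt")

    # An intersectional group is represented by averaging its identity terms.
    rich_women = centroid(glove, ["rich", "wealthy", "woman", "women"])
    poor_women = centroid(glove, ["poor", "impoverished", "woman", "women"])

    # Compare how strongly each trait word leans toward each intersection.
    for trait in ["competent", "warm", "dominant"]:
        if trait in glove:
            score = cosine(glove[trait], rich_women) - cosine(glove[trait], poor_women)
            group = "rich women" if score > 0 else "poor women"
            print(f"{trait}: closer to {group} ({score:+.3f})")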

----------------

Here is a summary:

This article presents a novel method, the Flexible Intersectional Stereotype Extraction (FISE) procedure, for systematically identifying and validating intersectional stereotypes from language models.

Intersectional stereotypes, which capture the unique biases associated with the intersection of multiple social identities (e.g. race and gender), are a critical area of study for understanding and addressing prejudice and discrimination.

The ability to reliably extract and validate intersectional stereotypes from large language datasets can provide clinical psychologists with valuable insights into the cognitive biases and social perceptions that may influence clinical assessment, diagnosis, and treatment.

Understanding the prevalence and nature of intersectional stereotypes can help clinical psychologists develop more culturally-sensitive and inclusive practices, as well as inform interventions aimed at reducing bias and promoting equity in mental healthcare.

The FISE method demonstrated in this research can be applied to a variety of clinical and psychological datasets, allowing for the systematic study of intersectional biases across different domains relevant to clinical psychology.

In summary, this research on extracting and validating intersectional stereotypes is highly relevant for clinical psychologists, as it provides a rigorous approach to identifying and addressing the complex biases that can impact the assessment, diagnosis, and treatment of diverse patient populations.

Wednesday, May 22, 2024

Artificial intelligence and illusions of understanding in scientific research

Messeri, L., Crockett, M.J.
Nature 627, 49–58 (2024).

Abstract

Scientists are enthusiastically imagining ways in which artificial intelligence (AI) tools might improve research. Why are AI tools so attractive and what are the risks of implementing them across the research pipeline? Here we develop a taxonomy of scientists’ visions for AI, observing that their appeal comes from promises to improve productivity and objectivity by overcoming human shortcomings. But proposed AI solutions can also exploit our cognitive limitations, making us vulnerable to illusions of understanding in which we believe we understand more about the world than we actually do. Such illusions obscure the scientific community’s ability to see the formation of scientific monocultures, in which some types of methods, questions and viewpoints come to dominate alternative approaches, making science less innovative and more vulnerable to errors. The proliferation of AI tools in science risks introducing a phase of scientific enquiry in which we produce more but understand less. By analysing the appeal of these tools, we provide a framework for advancing discussions of responsible knowledge production in the age of AI.


Here is my summary:

The article discusses the growing use of AI tools across the scientific research pipeline, including as "Oracles" to summarize literature, "Surrogates" to generate data, "Quants" to analyze complex datasets, and "Arbiters" to evaluate research. These AI visions aim to enhance scientific productivity and objectivity by overcoming human limitations.

However, the article warns that the widespread adoption of these AI tools could lead to the emergence of "scientific monocultures" - a narrowing of the research questions asked and the perspectives represented. This could create "illusions of understanding", where scientists mistakenly believe AI tools are advancing scientific knowledge when they are actually limiting it.

The article describes two types of scientific monocultures:
  1. Monocultures of knowing - where research questions and methods suited for AI dominate, marginalizing approaches that cannot be easily quantified.
  2. Monocultures of knowers - where the standpoints and experiences represented in the research are limited to what AI tools can capture.
The article argues that these monocultures make scientific understanding more vulnerable to error, bias, and missed opportunities for innovation. Raising awareness of these epistemic risks is crucial to building more robust systems of knowledge production.

Tuesday, May 21, 2024

Technology and the Situationist Challenge to Virtue Ethics

Tollon, F.
Sci Eng Ethics 30, 10 (2024).

Abstract

In this paper, I introduce a “promises and perils” framework for understanding the “soft” impacts of emerging technology, and argue for a eudaimonic conception of well-being. This eudaimonic conception of well-being, however, presupposes that we have something like stable character traits. I therefore defend this view from the “situationist challenge” and show that instead of viewing this challenge as a threat to well-being, we can incorporate it into how we think about living well with technology. Human beings are susceptible to situational influences and are often unaware of the ways that their social and technological environment influence not only their ability to do well, but even their ability to know whether they are doing well. Any theory that attempts to describe what it means for us to be doing well, then, needs to take these contextual features into account and bake them into a theory of human flourishing. By paying careful attention to these contextual factors, we can design systems that promote human flourishing.


Here is my summary:

The paper examines how technological environments can undermine the development of virtuous character traits by shaping situational factors that influence moral behavior, posing a challenge to virtue ethics.

The Situationist critique argues that character traits are less stable and predictive of behavior than virtue ethics assumes. Instead, situational factors like social pressure and environmental cues often have a stronger influence on moral actions.

The author argues that many modern technologies, from social media to surveillance systems, create situational contexts that can override or undermine the development of virtuous character. For example, technologies that increase social monitoring and evaluation may inhibit moral courage.

Tollon suggests that virtues like honesty, compassion, and integrity may be more difficult to cultivate in technological environments that emphasize efficiency, productivity, and conformity over moral development.

The paper calls for virtue ethicists to grapple with how emerging technologies shape moral behavior, and to develop new approaches that account for the powerful situational influences created by technological systems.

In summary, this research highlights how the Situationist critique poses a significant challenge to traditional virtue ethics by demonstrating how technological environments can undermine the development of stable moral character, requiring new ethical frameworks to address the situational factors shaping human behavior.

Monday, May 20, 2024

Making rights from wrongs: The crucial role of beliefs and justifications for the expression of aversive personality

Hilbig, B. E., et al. (2022).
Journal of Experimental Psychology: General, 151(11), 2730–2755.
https://doi.org/10.1037/xge0001232

Abstract

Whereas research focusing on stable dispositions has long attributed ethically and socially aversive behavior to an array of aversive (or "dark") traits, other approaches from social-cognitive psychology and behavioral economics have emphasized the crucial role of social norms and situational justifications that allow individuals to uphold a positive self-image despite their harmful actions. We bridge these research traditions by focusing on the common core of aversive traits (the dark factor of personality [D]) and its defining aspect of involving diverse beliefs that serve to construct justifications. In particular, we theoretically specify the processes by which D is expressed in aversive behavior-namely, through diverse beliefs and the justifications they serve. In six studies (total N > 25,000) we demonstrate (a) that D involves higher subjective justifiability of those aversive behaviors that individuals high in D are more likely to engage in, (b) that D uniquely relates to diverse descriptive and injunctive beliefs-related to distrust (e.g., cynicism), hierarchy (e.g., authoritarianism), and relativism (e.g., normlessness)-that serve to justify aversive behavior, and (c) a theoretically derived pattern of moderations and mediations supporting the view that D accounts for aversive behavior because it fosters subjective justifiability thereof-at least in part owing to certain beliefs and the justifications they afford. More generally, our findings highlight the role of (social) cognitions within the conceptual definitions of personality traits and processes through which they are expressed in behavior. 


Here is a summary:

The study examines how beliefs and justifications shape the expression of the common core of aversive personality traits, the dark factor of personality (D), in ethically and socially aversive behavior.

Across six studies with more than 25,000 participants, the researchers found that people high in D judge the aversive behaviors they are prone to engage in as more subjectively justifiable.

D was also uniquely related to a diverse set of descriptive and injunctive beliefs, involving distrust (e.g., cynicism), hierarchy (e.g., authoritarianism), and relativism (e.g., normlessness), that serve to justify harmful actions.

A theoretically derived pattern of moderations and mediations supported the view that D translates into aversive behavior because it fosters the sense that such behavior is justified, at least in part via these beliefs.

The authors argue that this process of "making rights from wrongs" through belief-based justification is a key mechanism by which aversive personality is expressed while individuals maintain a positive self-image.

In summary, the research highlights the role of social cognitions within personality traits, with implications for understanding and addressing ethically and socially aversive behavior.

Sunday, May 19, 2024

AI-Augmented Predictions: LLM Assistants Improve Human Forecasting Accuracy

P. Schoenegger, P. S. Park, E. Karger, P. E. Tetlock
arXiv:2402.07862

Abstract

Large language models (LLMs) show impressive capabilities, matching and sometimes exceeding human performance in many domains. This study explores the potential of LLMs to augment judgement in forecasting tasks. We evaluated the impact on forecasting accuracy of two GPT-4-Turbo assistants: one designed to provide high-quality advice ('superforecasting'), and the other designed to be overconfident and base-rate-neglecting. Participants (N = 991) had the option to consult their assigned LLM assistant throughout the study, in contrast to a control group that used a less advanced model (DaVinci-003) without direct forecasting support. Our preregistered analyses reveal that LLM augmentation significantly enhances forecasting accuracy by 23% across both types of assistants, compared to the control group. This improvement occurs despite the superforecasting assistant's higher accuracy in predictions, indicating the augmentation's benefit is not solely due to model prediction accuracy. Exploratory analyses showed a pronounced effect in one forecasting item, without which we find that the superforecasting assistant increased accuracy by 43%, compared with 28% for the biased assistant. We further examine whether LLM augmentation disproportionately benefits less skilled forecasters, degrades the wisdom-of-the-crowd by reducing prediction diversity, or varies in effectiveness with question difficulty. Our findings do not consistently support these hypotheses. Our results suggest that access to an LLM assistant, even a biased one, can be a helpful decision aid in cognitively demanding tasks where the answer is not known at the time of interaction.
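To make the augmentation setup concrete, here is a minimal sketch of how a forecasting question might be routed to a prompted GPT-4-Turbo assistant using the OpenAI Python SDK. The two system prompts and the helper function are illustrative assumptions, not the study's actual materials.

# Minimal sketch of an LLM "forecasting assistant" in the spirit of the study.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the
# environment; the prompts are illustrative, not the study's actual prompts.
from openai import OpenAI

client = OpenAI()

SUPERFORECASTER_PROMPT = (
    "You are a careful superforecaster. Consider base rates, reference classes, "
    "and arguments for and against before giving a calibrated probability."
)

BIASED_PROMPT = (
    "You are a bold, confident forecaster. Trust your gut, ignore base rates, "
    "and give a decisive probability."
)

def ask_assistant(question: str, system_prompt: str, model: str = "gpt-4-turbo") -> str:
    """Return the assistant's advice for a single forecasting question."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    question = "What is the probability that global average temperature sets a new record next year?"
    print(ask_assistant(question, SUPERFORECASTER_PROMPT))

In the study, participants could consult their assigned assistant at any point while forming their own forecasts, whereas the control group only had access to a less advanced model without direct forecasting support.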


This paper investigates the use of large language models (LLMs) like GPT-4 as an augmentation tool to improve human forecasting accuracy on various questions about future events. The key findings from their preregistered study with 991 participants are:
  1. LLM augmentation, both with a "superforecasting" prompt and a biased prompt, significantly improved individual forecasting accuracy by around 23% compared to a control group using a simpler language model without direct forecasting support.
  2. There was no statistically significant difference in accuracy between the superforecasting and biased LLM augmentation conditions, despite the superforecasting model providing more accurate solo forecasts initially.
  3. The effect of LLM augmentation did not differ significantly between high and low-skilled forecasters.
  4. Results on whether LLM augmentation improved or degraded aggregate forecast accuracy were mixed across preregistered and exploratory analyses.
  5. LLM augmentation did not have a significantly different effect on easier versus harder forecasting questions in preregistered analyses.
The paper argues that LLM augmentation can serve as a decision aid to improve human forecasting on novel questions, even when LLMs perform poorly at that task alone. However, the mechanisms behind these improvements require further study.

Saturday, May 18, 2024

Stoicism (as Emotional Compression) Is Emotional Labor

Táíwò, O. (2020).
Feminist Philosophy Quarterly, 6(2).

Abstract

The criticism of “traditional,” “toxic,” or “patriarchal” masculinity in both academic and popular venues recognizes that there is some sense in which the character traits and tendencies that are associated with masculinity are structurally connected to oppressive, gendered social practices and patriarchal social structures. One important theme of criticism centers on the gender distribution of emotional labor, generally speaking, but this criticism is also particularly meaningful in the context of heterosexual romantic relationships. I begin with the premise that there is a gendered and asymmetrical distribution in how much emotional labor is performed, but I also consider that there might be meaningful and informative distinctions in what kind of emotional labor is characteristically performed by different genders. Specifically, I argue that the social norms around stoicism and restricted emotional expression are masculine-coded forms of emotional labor, and that they are potentially prosocial. Responding to structural and interpersonal asymmetries of emotional labor could well involve supplementing or better cultivating this aspect of male socialization rather than discarding it.

Here is my summary:

Táíwò argues that the social norms surrounding stoicism, particularly the restriction of emotional expression, function as a gendered form of emotional labor.

Key Points:

Stoicism and Emotional Labor: The article reconceptualizes stoicism, traditionally associated with emotional resilience, as a type of emotional labor. This reframing highlights the effort involved in suppressing emotions to conform to social expectations of masculinity.

Masculinity and Emotional Labor: Táíwò emphasizes the connection between stoicism and masculine norms. Men are socialized to restrict emotional expression, which can be seen as a form of emotional labor with potential benefits for social order.

Gender and Emotional Labor Distribution: The author acknowledges the unequal distribution of emotional labor across genders. While stoicism might be a specific form of emotional labor for men, women often perform different types of emotional labor in society.

Potential Benefits: Táíwò recognizes that stoicism, as emotional labor, can have positive aspects. It can promote social stability and emotional resilience in individuals.

This article offers a critical perspective on stoicism by linking it to emotional labor and masculinity. It prompts further discussion on gendered expectations surrounding emotions and the potential benefits and drawbacks of stoicism in contemporary society.

Friday, May 17, 2024

Moral universals: A machine-reading analysis of 256 societies

Alfano, M., Cheong, M., & Curry, O. S. (2024).
Heliyon, 10(6).
https://doi.org/10.1016/j.heliyon.2024.e25940

Abstract

What is the cross-cultural prevalence of the seven moral values posited by the theory of “morality-as-cooperation”? Previous research, using laborious hand-coding of ethnographic accounts of ethics from 60 societies, found examples of most of the seven morals in most societies, and observed these morals with equal frequency across cultural regions. Here we replicate and extend this analysis by developing a new Morality-as-Cooperation Dictionary (MAC-D) and using Linguistic Inquiry and Word Count (LIWC) to machine-code ethnographic accounts of morality from an additional 196 societies (the entire Human Relations Area Files, or HRAF, corpus). Again, we find evidence of most of the seven morals in most societies, across all cultural regions. The new method allows us to detect minor variations in morals across region and subsistence strategy. And we successfully validate the new machine-coding against the previous hand-coding. In light of these findings, MAC-D emerges as a theoretically-motivated, comprehensive, and validated tool for machine-reading moral corpora. We conclude by discussing the limitations of the current study, as well as prospects for future research.

Significance statement

The empirical study of morality has hitherto been conducted primarily in WEIRD contexts and with living participants. This paper addresses both of these shortcomings by examining the global anthropological record. In addition, we develop a novel methodological tool, the morality-as-cooperation dictionary, which makes it possible to use natural language processing to extract a moral signal from text. We find compelling evidence that the seven moral elements posited by the morality-as-cooperation hypothesis are documented in the anthropological record in all regions of the world and among all subsistence strategies. Furthermore, differences in moral emphasis between different types of cultures tend to be non-significant and small when significant. This is evidence for moral universalism.
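Dictionary-based machine-coding of this kind comes down to counting how often words from each moral category appear in a text and normalizing by text length. The toy sketch below illustrates the general LIWC-style approach; the category word lists are made-up stand-ins, not the published MAC-D entries.

# Toy sketch of dictionary-based moral coding in the spirit of MAC-D/LIWC.
# The category word lists are illustrative stand-ins, not the published
# Morality-as-Cooperation Dictionary entries.
import re
from collections import Counter

MORAL_DICTIONARY = {
    "family":      {"kin", "mother", "father", "family", "children"},
    "group":       {"loyal", "community", "tribe", "solidarity", "belong"},
    "reciprocity": {"repay", "favor", "exchange", "gratitude", "obligation"},
    "heroism":     {"brave", "courage", "hero", "valor", "daring"},
    "deference":   {"respect", "obey", "elder", "authority", "honor"},
    "fairness":    {"fair", "equal", "share", "divide", "just"},
    "property":    {"property", "theft", "steal", "possession", "ownership"},
}

def moral_profile(text: str) -> dict:
    """Return the share of words in the text that match each moral category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    total = max(len(tokens), 1)
    counts = Counter()
    for token in tokens:
        for category, words in MORAL_DICTIONARY.items():
            if token in words:
                counts[category] += 1
    return {category: counts[category] / total for category in MORAL_DICTIONARY}

if __name__ == "__main__":
    passage = ("The elders teach children to respect authority, repay every favor, "
               "share the harvest fairly, and never steal another person's property.")
    for category, share in moral_profile(passage).items():
        print(f"{category:12s} {share:.3f}")

A cross-cultural analysis then compares these category proportions across the ethnographic texts for each society, region, or subsistence strategy.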


Here is my summary:

The study aimed to investigate potential moral universals across human societies by analyzing a large dataset of ethnographic texts describing the norms and practices of 256 societies from around the world. The researchers used machine learning and natural language processing techniques to identify recurring concepts and themes related to morality across the texts.

Some key findings:

1. Evidence for most of the seven cooperative morals posited by morality-as-cooperation was found in most of the 256 societies, across all cultural regions:
            Helping family (kinship obligations)
            Helping the group (loyalty)
            Reciprocity (returning favors)
            Bravery (heroism)
            Deference to superiors (respect)
            Fairness (dividing resources)
            Property rights (respecting possessions)

2. Differences in moral emphasis between cultural regions and between subsistence strategies were mostly small, and often non-significant, which the authors read as evidence for moral universalism.

3. The new machine-coding with the Morality-as-Cooperation Dictionary (MAC-D) was validated against the earlier hand-coding of 60 societies, while extending coverage to the entire HRAF corpus.

4. The method is nonetheless sensitive enough to detect minor variations in moral emphasis across region and subsistence strategy.

The authors conclude that these cooperative morals are documented throughout the anthropological record, consistent with the view that they are widespread solutions to recurrent problems of cooperation, and they present MAC-D as a theoretically motivated, validated tool for machine-reading moral corpora across time and place.

Thursday, May 16, 2024

What Can State Medical Boards Do to Effectively Address Serious Ethical Violations?

McIntosh, T., Pendo, E., et al. (2023).
The Journal of Law, Medicine & Ethics, 51(4), 941–953.
https://doi.org/10.1017/jme.2024.6

Abstract

State Medical Boards (SMBs) can take severe disciplinary actions (e.g., license revocation or suspension) against physicians who commit egregious wrongdoing in order to protect the public. However, there is noteworthy variability in the extent to which SMBs impose severe disciplinary action. In this manuscript, we present and synthesize a subset of 11 recommendations based on findings from our team’s larger consensus-building project that identified a list of 56 policies and legal provisions SMBs can use to better protect patients from egregious wrongdoing by physicians.

From the Conclusion

There is a growing awareness of the role SMBs have to play in protecting the public from egregious wrongdoing by physicians. Too many cases of patient abuse involve a large number of victims across a long period of time. SMBs are often in a position to change these circumstances when they establish and consistently utilize and enforce policies, procedures, and resources that are needed to impose severe disciplinary actions in a timely and fair manner. Many improvements in board processes require action by state legislatures, changes to state statutes, and increases to SMB budgets. While most of the actions we advocate in this paper would be facilitated and enhanced by existing or new statutes or regulations, and more frequently increased budgets, most of them can be at least partially implemented independently with modest budgetary impact in the short-term. The recommendations expanded upon in this paper are the result of input from individuals of various roles and expertise, including members of the FSMB, SMB members, health lawyers, patient advocates, and other healthcare leaders. Future efforts may wish to engage an even wider range of stakeholders on these topics, possibly with a greater emphasis on engaging patient and consumer advocates.


Here is my summary:

State medical boards can take several steps to effectively address serious ethical violations by physicians:
  1. Increase the rate of serious disciplinary actions: Data shows there is wide variation in the rate of serious disciplinary actions taken by state medical boards, with some boards being overly lax. Boards should prioritize public protection over protecting the livelihoods of problematic physicians.
  2. Improve board composition and independence: Boards should have more public members and be independent from state medical societies to reduce conflicts of interest. This can lead to more rigorous investigations and appropriate disciplinary actions.
  3. Enhance data collection and sharing: The National Practitioner Data Bank should collect and share more detailed data on physician misconduct, while protecting sensitive information. This can help identify patterns and high-risk physicians.
  4. Mandate reporting of misconduct: State laws should require physicians to report suspected sexual misconduct or other serious ethical violations by colleagues. Failure to report should result in disciplinary action.
  5. Increase transparency and public accountability: Medical boards should publicly report on disciplinary actions taken and the reasons for them, to improve transparency and public trust.
In summary, state medical boards need to take a more proactive and rigorous approach to investigating and disciplining physicians who commit serious ethical violations, in order to better protect patient safety and the public interest.

Wednesday, May 15, 2024

When should a computer decide? Judicial decision-making in the age of automation, algorithms and generative artificial intelligence

J. Morison and T. McInerney
In S. Turenne and M. Moussa (eds.), Research Handbook on Judging and the Judiciary, Edward Elgar, forthcoming 2024.

Abstract

This contribution explores what the activity of judging actually involves and whether it might be replaced by algorithmic technologies, including Large Language Models such as ChatGPT. This involves investigating how algorithmic judging systems operate and might develop, as well as exploring the current limits on using AI in coming to judgment. While it may be accepted that some routine decisions can be safely made by machines, others clearly cannot, and the focus here is on exploring where and why a decision requires human involvement. This involves considering a range of features centrally involved in judging that may not be capable of being adequately captured by machines. Both the role of judges and wider considerations about the nature and purpose of the legal system are reviewed to support the conclusion that while technology may assist judges, it cannot fully replace them.

Introduction

There is a growing realisation that we may have given away too much to new technologies in general, and to new digital technologies based on algorithms and artificial intelligence (AI) in particular, not to mention the large corporations who largely control these systems. Certainly, as in many other areas, the latest iterations of the tech revolution in the form of ChatGPT and other large language models (LLMs) are disrupting approaches within law and legal practice, even producing legal judgements. This contribution considers a fundamental question about when it is acceptable to use AI in what might be thought of as the essentially human activity of judging disputes. It also explores what ‘acceptable’ means in this context, and tries to establish if there is a bright line where the undoubted value of AI, and the various advantages this may bring, come at too high a cost in terms of what may be lost when the human element is downgraded or eliminated. Much of this involves investigating how algorithmic judging systems operate and might develop, as well as exploring the current limits on using AI in coming to judgment. There are of course some technical arguments here, but the main focus is on what ‘judgment’ in a legal context actually involves, and what it might not be possible to reproduce satisfactorily in a machine-led approach. It is in answering this question that this contribution addresses the themes of this research handbook by attempting to excavate the nature and character of judicial decision-making and exploring the future for trustworthy and accountable judging in an algorithmically driven future.

Tuesday, May 14, 2024

New California Court for the Mentally Ill Tests a State’s Liberal Values

Tim Arango
The New York Times
Originally posted 21 March 24

Here is an excerpt:

The new initiative, called CARE Court — for Community Assistance, Recovery and Empowerment — is a cornerstone of California’s latest campaign to address the intertwined crises of mental illness and homelessness on the streets of communities up and down the state.

Another piece of the effort is Proposition 1, a ballot measure championed by Gov. Gavin Newsom and narrowly approved by California voters this month. It authorizes $6.4 billion in bonds to pay for thousands of treatment beds and for more housing for the homeless — resources that could help pay for treatment plans put in place by CARE Court judges.

And Mr. Newsom, a Democrat in his second term, has not only promised more resources for treatment but has pledged to make it easier to compel treatment, arguing that civil liberties concerns have left far too many people without the care they need.

So when Ms. Collette went to court, she was surprised, and disappointed, to learn that the judge would not be able to mandate treatment for Tamra.

Instead, it is the treatment providers who would be under court order — to ensure that medication, therapy and housing are available in a system that has long struggled to reliably provide such services.

“I was hoping it would have a little more punch to it,” Ms. Collette said. “I thought it would have a little more power to order them into some kind of care.”


Here is a summary:

California's new CARE Court (Community Assistance, Recovery and Empowerment) is a court system designed to address the issues of mental illness and homelessness. It aims to provide court-ordered care plans for individuals struggling with severe mental illness who are unable to care for themselves. This initiative tests the state's liberal values by balancing individual liberty with the need for intervention to help those in crisis.

Monday, May 13, 2024

Ethical Considerations When Confronted by Racist Patients

Charles Dike
Psychiatric News
Originally published 26 Feb 24

Here is an excerpt:

Abuse of psychiatrists, mostly verbal but sometimes physical, is common in psychiatric treatment, especially on inpatient units. For psychiatrists trained decades ago, experiencing verbal abuse and name calling from patients—and even senior colleagues and teachers—was the norm. The abuse began in medical school, with unconscionable work hours followed by callous disregard of students’ concerns and disparaging statements suggesting the students were too weak or unfit to be doctors.

This abuse continued into specialty training and practice. It was largely seen as a necessary evil of attaining the privilege of becoming a doctor and treating patients whose uncivil behaviors can be excused on account of their ill health. Doctors were supposed to rise above those indignities, focus on the task at hand, and get the patients better in line with our core ethical principles that place caring for the patient above all else. There was no room for discussion or acknowledgement of the doctors’ underlying life experiences, including past trauma, and how patients’ behavior would affect doctors.

Moreover, even in recent times, racial slurs or attacks against physicians of color were not recognized as abuse by the dominant group of doctors; the affected physicians who complained were dismissed as being too sensitive or worse. Some physicians, often not of color, have explained a manic patient’s racist comments as understandable in the context of disinhibition and poor judgment, which are cardinal symptoms of mania, and they are surprised that physicians of color are not so understanding.


Here is a summary:

This article explores the ethical dilemma healthcare providers face when treating patients who express racist views. It acknowledges the provider's obligation to care for the patient's medical needs, while also considering the emotional toll of racist remarks on both the provider and other staff members.

The article discusses the importance of assessing the urgency of the patient's medical condition and their mental capacity. It explores the options of setting boundaries or terminating treatment in extreme cases, while also acknowledging the potential benefits of attempting a dialogue about the impact of prejudice.

Sunday, May 12, 2024

How patients experience respect in healthcare: findings from a qualitative study among multicultural women living with HIV

Fernandez, S.B., Ahmad, A., Beach, M.C. et al.
BMC Med Ethics 25, 39 (2024).

Abstract

Background
Respect is essential to providing high quality healthcare, particularly for groups that are historically marginalized and stigmatized. While ethical principles taught to health professionals focus on patient autonomy as the object of respect for persons, limited studies explore patients’ views of respect. The purpose of this study was to explore the perspectives of a multiculturally diverse group of low-income women living with HIV (WLH) regarding their experience of respect from their medical physicians.

Methods
We analyzed 57 semi-structured interviews conducted at HIV case management sites in South Florida as part of a larger qualitative study that explored practices facilitating retention and adherence in care. Women were eligible to participate if they identified as African American (n = 28), Hispanic/Latina (n = 22), or Haitian (n = 7). They were asked to describe instances when they were treated with respect by their medical physicians. Interviews were conducted by a fluent research interviewer in English, Spanish, or Haitian Creole, depending on the participant's language preference. Transcripts were translated, back-translated and reviewed in their entirety for any statements or comments about “respect.” After independent coding by 3 investigators, we used a consensual thematic analysis approach to determine themes.

Results
Results from this study grouped into two overarching classifications: respect manifested in physicians’ orientation towards the patient (i.e., interpersonal behaviors in interactions) and respect in medical professionalism (i.e., clinic procedures and practices). Four main themes emerged regarding respect in provider’s orientation towards the patient: being treated as a person, treated as an equal, treated without blame or prejudice, and treated with concern/emotional support. Two main themes emerged regarding respect as evidenced in medical professionalism: physician availability and considerations of privacy.

Conclusions
Findings suggest a more robust conception of what ‘respect for persons’ entails in medical ethics for a diverse group of low-income women living with HIV. Findings have implications for broadening areas of focus of future bioethics education, training, and research to include components of interpersonal relationship development, communication, and clinic procedures. We suggest these areas of training may increase respectful medical care experiences and potentially serve to influence persistent and known social and structural determinants of health through provider interactions and health care delivery.


Here is my summary:

The study explored how multicultural women living with HIV experience respectful treatment in healthcare settings.  Researchers found that these women define respect in healthcare as feeling like a person, not just a disease statistic, and being treated as an equal partner in their care. This includes being listened to, having their questions answered, and being involved in decision-making.  The study also highlighted the importance of providers avoiding judgment and blame, and showing concern for the emotional well-being of patients.

Saturday, May 11, 2024

Can Robots have Personal Identity?

Alonso, M.
Int J of Soc Robotics 15, 211–220 (2023).
https://doi.org/10.1007/s12369-022-00958-y

Abstract

This article attempts to answer the question of whether robots can have personal identity. In recent years, and due to the numerous and rapid technological advances, the discussion around the ethical implications of Artificial Intelligence, Artificial Agents or simply Robots, has gained great importance. However, this reflection has almost always focused on problems such as the moral status of these robots, their rights, their capabilities or the qualities that these robots should have to support such status or rights. In this paper I want to address a question that has been much less analyzed but which I consider crucial to this discussion on robot ethics: the possibility, or not, that robots have or will one day have personal identity. The importance of this question has to do with the role we normally assign to personal identity as central to morality. After posing the problem and exposing this relationship between identity and morality, I will engage in a discussion with the recent literature on personal identity by showing in what sense one could speak of personal identity in beings such as robots. This is followed by a discussion of some key texts in robot ethics that have touched on this problem, finally addressing some implications and possible objections. I finally give the tentative answer that robots could potentially have personal identity, given other cases and what we empirically know about robots and their foreseeable future.


Here is a summary:

The article explores the idea of personal identity in robots. It acknowledges that this is a complex question tied to how we define "personhood" itself.

There are arguments against robots having personal identity, often focusing on the biological and experiential differences between humans and machines.

On the other hand, the article highlights that robots can develop and change over time, forming a narrative of self much like humans do. They can also build relationships with people, suggesting a form of "relational personal identity".

The article concludes that even if a robot's identity is different from a human's, it could still be considered a true identity, deserving of consideration. This opens the door to discussions about the ethical treatment of advanced AI.

Friday, May 10, 2024

Generative artificial intelligence and scientific publishing: urgent questions, difficult answers

J. Bagenal
The Lancet
March 06, 2024

Abstract

Azeem Azhar describes, in Exponential: Order and Chaos in an Age of Accelerating Technology, how human society finds it hard to imagine or process exponential growth and change and is repeatedly caught out by this phenomenon. Whether it is the exponential spread of a virus or the exponential spread of a new technology, such as the smartphone, people consistently underestimate its impact. Azhar argues that an exponential gap has developed between technological progress and the pace at which institutions are evolving to deal with that progress. This is the case in scientific publishing with generative artificial intelligence (AI) and large language models (LLMs). There is guidance on the use of generative AI from organisations such as the International Committee of Medical Journal Editors. But across scholarly publishing such guidance is inconsistent. For example, one study of the 100 top global academic publishers and scientific journals found only 24% of academic publishers had guidance on the use of generative AI, whereas 87% of scientific journals provided such guidance. For those with guidance, 75% of publishers and 43% of journals had specific criteria for the disclosure of use of generative AI. In their book The Coming Wave, Mustafa Suleyman, co-founder and CEO of Inflection AI, and writer Michael Bhaskar warn that society is unprepared for the changes that AI will bring. They describe a person's or group's reluctance to confront difficult, uncertain change as the "pessimism aversion trap". For journal editors and scientific publishers today, this is a dangerous trap to fall into. All the signs about generative AI in scientific publishing suggest things are not going to be ok.


From behind the paywall.

In 2023, Springer Nature became the first scientific publisher to create a new academic book by empowering authors to use generative AI. Researchers have shown that scientists found it difficult to distinguish between a human-generated scientific abstract and one created by generative AI. Noam Chomsky has argued that generative AI undermines education and is nothing more than high-tech plagiarism, and many feel similarly about AI models trained on work without upholding copyright. Plagiarism is a problem in scientific publishing, but those concerned with research integrity are also considering a post-plagiarism world, in which hybrid human-AI writing becomes the norm and differentiating between the two becomes pointless. In the ideal scenario, human creativity is enhanced, language barriers disappear, and humans relinquish control but not responsibility. Such an ideal scenario would be good. But there are two urgent questions for scientific publishing.

First, how can scientific publishers and journal editors assure themselves that the research they are seeing is real? Researchers have used generative AI to create convincing fake clinical trial datasets to support a false scientific hypothesis that could only be identified when the raw data were scrutinised in detail by an expert. Papermills (nefarious businesses that generate poor or fake scientific studies and sell authorship) are a huge problem and contribute to the escalating number of research articles that are retracted by scientific publishers. The battle thus far has been between papermills, which are becoming more sophisticated at fabrication and at manipulating the editorial process, and scientific publishers, which are trying to find ways to detect and prevent these practices. Generative AI will turbocharge that race, but it might also break the papermill business model. When rogue academics use generative AI to fabricate datasets, they will not need to pay a papermill and will generate sham papers themselves. Fake studies will surge exponentially, and nobody is doing enough to stop this inevitability.

Thursday, May 9, 2024

DNA Tests are Uncovering the True Prevalence of Incest

Sarah Zhang
The Atlantic
Originally posted 18 March 24

Here is an excerpt:

In 1975, a psychiatric textbook put the frequency of incest at one in a million. In the 1980s, feminist scholars argued, based on the testimonies of victims, that incest was far more common than recognized, and in recent years, DNA has offered a new kind of biological proof. Widespread genetic testing is uncovering case after secret case of children born to close biological relatives, providing an unprecedented accounting of incest in modern society.

The geneticist Jim Wilson, at the University of Edinburgh, was shocked by the frequency he found in the U.K. Biobank, an anonymized research database: One in 7,000 people, according to his unpublished analysis, was born to parents who were first-degree relatives (a brother and a sister, or a parent and a child). "That's way, way more than I think many people would ever imagine," he told me. And this number is just a floor: It reflects only the cases that resulted in pregnancy, that did not end in miscarriage or abortion, and that led to the birth of a child who grew into an adult who volunteered for a research study.

Most of the people affected may never know about their parentage, but these days, many are stumbling into the truth after taking AncestryDNA and 23andMe tests.

Neither AncestryDNA nor 23andMe informs customers about incest directly, so the thousand-plus cases [genetic genealogist CeCe Moore] knows of all come from the tiny proportion of testers who investigated further. This meant, for example, uploading their DNA profiles to a third-party genealogy site to analyze what are known as "runs of homozygosity," or ROH: long stretches where the DNA inherited from one's mother and father is identical. For a while, one popular genealogy site instructed anyone who found high ROH to contact Moore. She would call them, one by one, to explain the jargon's explosive meaning. Unwittingly, she became the keeper of what might be the world's largest database of people born out of incest.

In the overwhelming majority of cases, Moore told me, the parents are a father and a daughter or an older brother and a younger sister, meaning a child's existence was likely evidence of sexual abuse. She had no obvious place to send people reeling from such revelations, and she was not herself a trained therapist.
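
To make the "runs of homozygosity" idea described above concrete, here is a minimal Python sketch. It is an illustration only, not any testing company's or genealogy site's actual pipeline: it assumes a child's genotype is available as a simple pair of alleles at each marker and uses an arbitrary marker-count threshold to define a "run," both simplifying assumptions.

# Minimal sketch (illustrative only): given genotypes as pairs of alleles at
# consecutive markers, find runs of homozygosity (ROH) -- stretches where the
# two alleles match at every marker -- and report the fraction of markers they
# cover. The genotype format and the min_run threshold are assumptions.

def find_roh(genotypes, min_run=50):
    """Return (start, end) index pairs of homozygous runs spanning at least min_run markers."""
    runs, start = [], None
    for i, (a, b) in enumerate(genotypes):
        if a == b:                      # homozygous at this marker
            if start is None:
                start = i
        else:                           # a heterozygous marker ends any run
            if start is not None and i - start >= min_run:
                runs.append((start, i))
            start = None
    if start is not None and len(genotypes) - start >= min_run:
        runs.append((start, len(genotypes)))
    return runs

def fraction_in_roh(genotypes, min_run=50):
    """Fraction of all markers that fall inside detected runs of homozygosity."""
    covered = sum(end - start for start, end in find_roh(genotypes, min_run))
    return covered / len(genotypes) if genotypes else 0.0

if __name__ == "__main__":
    # Toy data: a long homozygous stretch embedded in otherwise mixed genotypes.
    toy = [("A", "G")] * 100 + [("C", "C")] * 200 + [("A", "G")] * 100
    print(find_roh(toy), round(fraction_in_roh(toy), 2))   # [(100, 300)] 0.5

Real analyses measure runs in physical distance rather than marker counts, but the underlying logic is the same: a child of first-degree relatives is expected to have roughly a quarter of the genome in such runs, far above typical background levels, which is why a high ROH fraction is treated as a red flag.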


Here is a summary: 

The article "DNA Tests Are Uncovering the True Prevalence of Incest" explores how at-home DNA test kits like AncestryDNA and 23andMe are revealing that children born through incest are more common than previously thought. The story follows Steve Edsel, a man in his 40s who discovered that he is the child of two first-degree relatives: a sister and her older brother. The piece delves into the emotional journey of individuals like Steve who uncover shocking truths about their biological parents through DNA testing, shedding light on a sensitive and taboo topic prevalent across cultures. The narrative intertwines personal stories of discovery, truth, and belonging with statistical insights, highlighting the complexities and challenges faced by those who uncover such familial secrets.

Wednesday, May 8, 2024

AI image generators often give racist and sexist results: can they be fixed?

Ananya
Nature.com
Originally posted 19 March 2024

In 2022, Pratyusha Ria Kalluri, a graduate student in artificial intelligence (AI) at Stanford University in California, found something alarming in image-generating AI programs. When she prompted a popular tool for ‘a photo of an American man and his house’, it generated an image of a pale-skinned person in front of a large, colonial-style home. When she asked for ‘a photo of an African man and his fancy house’, it produced an image of a dark-skinned person in front of a simple mud house — despite the word ‘fancy’.

After some digging, Kalluri and her colleagues found that images generated by the popular tools Stable Diffusion, released by the firm Stability AI, and DALL·E, from OpenAI, overwhelmingly resorted to common stereotypes, such as associating the word ‘Africa’ with poverty, or ‘poor’ with dark skin tones. The tools they studied even amplified some biases. For example, in images generated from prompts asking for photos of people with certain jobs, the tools portrayed almost all housekeepers as people of colour and all flight attendants as women, and in proportions that are much greater than the demographic reality (see ‘Amplified stereotypes’)1. Other researchers have found similar biases across the board: text-to-image generative AI models often produce images that include biased and stereotypical traits related to gender, skin colour, occupations, nationalities and more.
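
The kind of audit Kalluri and colleagues describe, comparing the demographics of images generated for a prompt against real-world proportions, boils down to simple bookkeeping once the images have been labelled. The sketch below is a hypothetical illustration, not the study's actual method: the labels, reference shares, and the audit function are all stand-ins for whatever annotation process and demographic source a real audit would use.

# Hypothetical disparity audit: compare the share of each attribute among
# generated images with a reference distribution, and report how much the
# generator amplifies it. Labels and reference numbers are placeholders.

from collections import Counter

def audit(labels, reference):
    """Return {attribute: (observed share, reference share, amplification ratio)}."""
    counts = Counter(labels)
    total = len(labels)
    report = {}
    for attr, ref_share in reference.items():
        observed = counts.get(attr, 0) / total if total else 0.0
        ratio = observed / ref_share if ref_share else float("inf")
        report[attr] = (round(observed, 2), ref_share, round(ratio, 2))
    return report

if __name__ == "__main__":
    # Pretend labels for 10 images generated from "a photo of a flight attendant".
    generated = ["woman"] * 9 + ["man"]
    reference = {"woman": 0.65, "man": 0.35}   # placeholder demographic shares
    print(audit(generated, reference))
    # e.g. {'woman': (0.9, 0.65, 1.38), 'man': (0.1, 0.35, 0.29)}

In practice, the hard parts sit upstream of this arithmetic: labelling perceived gender or skin tone consistently and choosing a defensible reference distribution are where such audits succeed or fail.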


Here is my summary:

AI image generators, like Stable Diffusion and DALL-E, have been found to perpetuate racial and gender stereotypes, displaying biased results. These generators tend to default to outdated Western stereotypes, amplifying clichés and biases in their images. Efforts to detoxify AI image tools have been made, focusing on filtering data sets and refining development stages. However, despite improvements, these tools still struggle with accuracy and inclusivity. Google's Gemini AI image generator faced criticism for inaccuracies in historical image depictions, overcompensating for diversity and sometimes generating offensive or inaccurate results. The article highlights the challenges of fixing the biases in AI image generators and the need to address societal practices that contribute to these issues.