Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Sunday, May 19, 2024

AI-Augmented Predictions: LLM Assistants Improve Human Forecasting Accuracy

P. Schoenegger, P. S. Park, E. Karger, P. E. Tetlock
arXiv:2402.07862

Abstract

Large language models (LLMs) show impressive capabilities, matching and sometimes exceeding human performance in many domains. This study explores the potential of LLMs to augment judgement in forecasting tasks. We evaluated the impact on forecasting accuracy of two GPT-4-Turbo assistants: one designed to provide high-quality advice ('superforecasting'), and the other designed to be overconfident and base-rate-neglecting. Participants (N = 991) had the option to consult their assigned LLM assistant throughout the study, in contrast to a control group that used a less advanced model (DaVinci-003) without direct forecasting support. Our preregistered analyses reveal that LLM augmentation significantly enhances forecasting accuracy by 23% across both types of assistants, compared to the control group. This improvement occurs despite the superforecasting assistant's higher accuracy in predictions, indicating the augmentation's benefit is not solely due to model prediction accuracy. Exploratory analyses showed a pronounced effect in one forecasting item, without which we find that the superforecasting assistant increased accuracy by 43%, compared with 28% for the biased assistant. We further examine whether LLM augmentation disproportionately benefits less skilled forecasters, degrades the wisdom-of-the-crowd by reducing prediction diversity, or varies in effectiveness with question difficulty. Our findings do not consistently support these hypotheses. Our results suggest that access to an LLM assistant, even a biased one, can be a helpful decision aid in cognitively demanding tasks where the answer is not known at the time of interaction.


This paper investigates the use of large language models (LLMs) like GPT-4 as an augmentation tool to improve human forecasting accuracy on various questions about future events. The key findings from their preregistered study with 991 participants are:
  1. LLM augmentation, both with a "superforecasting" prompt and a biased prompt, significantly improved individual forecasting accuracy by around 23% compared to a control group using a simpler language model without direct forecasting support.
  2. There was no statistically significant difference in accuracy between the superforecasting and biased LLM augmentation conditions, despite the superforecasting model providing more accurate solo forecasts initially.
  3. The effect of LLM augmentation did not differ significantly between high- and low-skilled forecasters.
  4. Results on whether LLM augmentation improved or degraded aggregate forecast accuracy were mixed across preregistered and exploratory analyses.
  5. LLM augmentation did not have a significantly different effect on easier versus harder forecasting questions in preregistered analyses.
The paper argues that LLM augmentation can serve as a decision aid to improve human forecasting on novel questions, even when LLMs perform poorly at that task alone. However, the mechanisms behind these improvements require further study.
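
To make the accuracy comparison concrete: forecasting tournaments in this literature typically score probability forecasts with Brier scores, where lower is better, and report improvement as the relative reduction in error. The sketch below illustrates that calculation on invented forecasts; it is not the authors' code, and the paper's exact scoring and standardisation may differ.

```python
# Illustrative sketch (not the authors' code): comparing forecast accuracy
# between a control group and an LLM-assisted group using Brier scores.
# All data below are made up for demonstration.

def brier_score(prob: float, outcome: int) -> float:
    """Squared error between a probability forecast and a binary outcome."""
    return (prob - outcome) ** 2

def mean_brier(forecasts):
    """Average Brier score over a list of (probability, outcome) pairs."""
    return sum(brier_score(p, y) for p, y in forecasts) / len(forecasts)

# Hypothetical forecasts (probability the event occurs) and outcomes (1 = occurred).
control_forecasts = [(0.70, 1), (0.40, 0), (0.80, 0), (0.55, 1)]
assisted_forecasts = [(0.85, 1), (0.25, 0), (0.60, 0), (0.70, 1)]

control_score = mean_brier(control_forecasts)
assisted_score = mean_brier(assisted_forecasts)

# Lower Brier scores are better, so "improvement" is the relative reduction in error.
improvement = (control_score - assisted_score) / control_score * 100
print(f"Control mean Brier: {control_score:.3f}")
print(f"Assisted mean Brier: {assisted_score:.3f}")
print(f"Relative improvement: {improvement:.1f}%")
```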

Saturday, May 18, 2024

Stoicism (as Emotional Compression) Is Emotional Labor

Táíwò, O. (2020).
Feminist Philosophy Quarterly, 6(2).

Abstract

The criticism of “traditional,” “toxic,” or “patriarchal” masculinity in both academic and popular venues recognizes that there is some sense in which the character traits and tendencies that are associated with masculinity are structurally connected to oppressive, gendered social practices and patriarchal social structures. One important theme of criticism centers on the gender distribution of emotional labor, generally speaking, but this criticism is also particularly meaningful in the context of heterosexual romantic relationships. I begin with the premise that there is a gendered and asymmetrical distribution in how much emotional labor is performed, but I also consider that there might be meaningful and informative distinctions in what kind of emotional labor is characteristically performed by different genders. Specifically, I argue that the social norms around stoicism and restricted emotional expression are masculine-coded forms of emotional labor, and that they are potentially prosocial. Responding to structural and interpersonal asymmetries of emotional labor could well involve supplementing or better cultivating this aspect of male socialization rather than discarding it.

Here is my summary:

Táíwò argues that the social norms surrounding stoicism, particularly the restriction of emotional expression, function as a gendered form of emotional labor.

Key Points:

Stoicism and Emotional Labor: The article reconceptualizes stoicism, traditionally associated with emotional resilience, as a type of emotional labor. This reframing highlights the effort involved in suppressing emotions to conform to social expectations of masculinity.

Masculinity and Emotional Labor: Táíwò emphasizes the connection between stoicism and masculine norms. Men are socialized to restrict emotional expression, which can be seen as a form of emotional labor with potential benefits for social order.

Gender and Emotional Labor Distribution: The author acknowledges the unequal distribution of emotional labor across genders. While stoicism might be a specific form of emotional labor for men, women often perform different types of emotional labor in society.

Potential Benefits: Táíwò recognizes that stoicism, as emotional labor, can have positive aspects. It can promote social stability and emotional resilience in individuals.

This article offers a critical perspective on stoicism by linking it to emotional labor and masculinity. It prompts further discussion on gendered expectations surrounding emotions and the potential benefits and drawbacks of stoicism in contemporary society.

Friday, May 17, 2024

Moral universals: A machine-reading analysis of 256 societies

Alfano, M., Cheong, M., & Curry, O. S. (2024).
Heliyon, 10(6).
doi.org/10.1016/j.heliyon.2024.e25940 

Abstract

What is the cross-cultural prevalence of the seven moral values posited by the theory of “morality-as-cooperation”? Previous research, using laborious hand-coding of ethnographic accounts of ethics from 60 societies, found examples of most of the seven morals in most societies, and observed these morals with equal frequency across cultural regions. Here we replicate and extend this analysis by developing a new Morality-as-Cooperation Dictionary (MAC-D) and using Linguistic Inquiry and Word Count (LIWC) to machine-code ethnographic accounts of morality from an additional 196 societies (the entire Human Relations Area Files, or HRAF, corpus). Again, we find evidence of most of the seven morals in most societies, across all cultural regions. The new method allows us to detect minor variations in morals across region and subsistence strategy. And we successfully validate the new machine-coding against the previous hand-coding. In light of these findings, MAC-D emerges as a theoretically-motivated, comprehensive, and validated tool for machine-reading moral corpora. We conclude by discussing the limitations of the current study, as well as prospects for future research.

Significance statement

The empirical study of morality has hitherto been conducted primarily in WEIRD contexts and with living participants. This paper addresses both of these shortcomings by examining the global anthropological record. In addition, we develop a novel methodological tool, the morality-as-cooperation dictionary, which makes it possible to use natural language processing to extract a moral signal from text. We find compelling evidence that the seven moral elements posited by the morality-as-cooperation hypothesis are documented in the anthropological record in all regions of the world and among all subsistence strategies. Furthermore, differences in moral emphasis between different types of cultures tend to be non-significant and small when significant. This is evidence for moral universalism.


Here is my summary:

The study aimed to investigate potential moral universals across human societies by analyzing ethnographic texts describing the norms and practices of 256 societies from around the world. Rather than hand-coding, the researchers developed a Morality-as-Cooperation Dictionary (MAC-D) and applied it with the Linguistic Inquiry and Word Count (LIWC) program to machine-code moral content across the texts.

Some key findings:

1. Evidence of most of the seven cooperative morals posited by morality-as-cooperation was found in most societies, across all cultural regions:
            Helping kin (family values)
            Helping one's group (loyalty)
            Reciprocity (returning favors)
            Bravery (heroism)
            Deference to superiors (respect)
            Fairness (dividing disputed resources)
            Property rights (respecting prior possession)

2. Differences in moral emphasis across cultural regions were detectable but generally small, and mostly non-significant.

3. The machine-coded results were successfully validated against the earlier hand-coding of 60 societies, supporting MAC-D as a tool for machine-reading moral corpora.

4. Subsistence strategy (for example, reliance on agriculture or animal husbandry) was associated with minor variations in the relative emphasis placed on particular morals.

The authors argue that the seven cooperative morals are documented in all regions of the world and across all subsistence strategies, with only modest cultural variation in how they are emphasized. They interpret this as evidence for moral universalism and for the morality-as-cooperation hypothesis.
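
As a rough illustration of how a LIWC-style dictionary such as MAC-D works, the sketch below scores a passage of text by the share of its words that match small word lists for a few moral categories. The category names and vocabularies here are invented placeholders for illustration only; the real MAC-D is far larger and was validated against hand-coded data.

```python
# Toy illustration of dictionary-based moral coding in the style of LIWC.
# The category word lists below are invented placeholders, not the real MAC-D.
import re
from collections import Counter

moral_dictionary = {
    "kinship":     {"family", "kin", "mother", "father", "child"},
    "reciprocity": {"repay", "exchange", "favor", "debt", "gift"},
    "fairness":    {"fair", "share", "equal", "divide"},
    "property":    {"property", "possession", "theft", "steal", "own"},
}

def score_text(text: str) -> dict:
    """Return, for each moral category, the share of words that match its list."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1
    counts = Counter()
    for w in words:
        for category, vocab in moral_dictionary.items():
            if w in vocab:
                counts[category] += 1
    return {category: counts[category] / total for category in moral_dictionary}

ethnographic_passage = (
    "A man must repay the gift of his neighbor, share meat equally among kin, "
    "and never steal the property of another family."
)
print(score_text(ethnographic_passage))
```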

Thursday, May 16, 2024

What Can State Medical Boards Do to Effectively Address Serious Ethical Violations?

McIntosh, T., Pendo, E., et al. (2023).
The Journal of Law, Medicine & Ethics, 51(4), 941–953.
https://doi.org/10.1017/jme.2024.6

Abstract

State Medical Boards (SMBs) can take severe disciplinary actions (e.g., license revocation or suspension) against physicians who commit egregious wrongdoing in order to protect the public. However, there is noteworthy variability in the extent to which SMBs impose severe disciplinary action. In this manuscript, we present and synthesize a subset of 11 recommendations based on findings from our team’s larger consensus-building project that identified a list of 56 policies and legal provisions SMBs can use to better protect patients from egregious wrongdoing by physicians.

From the Conclusion

There is a growing awareness of the role SMBs have to play in protecting the public from egregious wrongdoing by physicians. Too many cases of patient abuse involve a large number of victims across a long period of time. SMBs are often in a position to change these circumstances when they establish and consistently utilize and enforce policies, procedures, and resources that are needed to impose severe disciplinary actions in a timely and fair manner. Many improvements in board processes require action by state legislatures, changes to state statutes, and increases to SMB budgets. While most of the actions we advocate in this paper would be facilitated and enhanced by existing or new statutes or regulations, and more frequently increased budgets, most of them can be at least partially implemented independently with modest budgetary impact in the short-term. The recommendations expanded upon in this paper are the result of input from individuals of various roles and expertise, including members of the FSMB, SMB members, health lawyers, patient advocates, and other healthcare leaders. Future efforts may wish to engage an even wider range of stakeholders on these topics, possibly with a greater emphasis on engaging patient and consumer advocates.


Here is my summary:

State medical boards can take several steps to effectively address serious ethical violations by physicians:
  1. Increase the rate of serious disciplinary actions: Data shows there is wide variation in the rate of serious disciplinary actions taken by state medical boards, with some boards being overly lax. Boards should prioritize public protection over protecting the livelihoods of problematic physicians.
  2. Improve board composition and independence: Boards should have more public members and be independent from state medical societies to reduce conflicts of interest. This can lead to more rigorous investigations and appropriate disciplinary actions.
  3. Enhance data collection and sharing: The National Practitioner Data Bank should collect and share more detailed data on physician misconduct, while protecting sensitive information. This can help identify patterns and high-risk physicians.
  4. Mandate reporting of misconduct: State laws should require physicians to report suspected sexual misconduct or other serious ethical violations by colleagues. Failure to report should result in disciplinary action.
  5. Increase transparency and public accountability: Medical boards should publicly report on disciplinary actions taken and the reasons for them, to improve transparency and public trust.
In summary, state medical boards need to take a more proactive and rigorous approach to investigating and disciplining physicians who commit serious ethical violations, in order to better protect patient safety and the public interest.

Wednesday, May 15, 2024

When should a computer decide? Judicial decision-making in the age of automation, algorithms and generative artificial intelligence

J. Morison and T. McInerney
In S. Turenne and M. Moussa (eds.), Research Handbook on Judging and the Judiciary, Edward Elgar, forthcoming 2024.

Abstract

This contribution explores what the activity of judging actually involves and whether it might be replaced by algorithmic technologies, including Large Language Models such as ChatGPT. This involves investigating how algorithmic judging systems operate and might develop, as well as exploring the current limits on using AI in coming to judgment. While it may be accepted that some routine decisions can be safely made by machines, others clearly cannot, and the focus here is on exploring where and why a decision requires human involvement. This involves considering a range of features centrally involved in judging that may not be capable of being adequately captured by machines. Both the role of judges and wider considerations about the nature and purpose of the legal system are reviewed to support the conclusion that while technology may assist judges, it cannot fully replace them.

Introduction

There is a growing realisation that we may have given away too much to new technologies in general, and to new digital technologies based on algorithms and artificial intelligence (AI) in particular, not to mention the large corporations who largely control these systems. Certainly, as in many other areas, the latest iterations of the tech revolution in the form of ChatGPT and other large language models (LLMs) are disrupting approaches within law and legal practice, even producing legal judgements. This contribution considers a fundamental question about when it is acceptable to use AI in what might be thought of as the essentially human activity of judging disputes. It also explores what ‘acceptable’ means in this context, and tries to establish if there is a bright line where the undoubted value of AI, and the various advantages this may bring, come at too high a cost in terms of what may be lost when the human element is downgraded or eliminated. Much of this involves investigating how algorithmic judging systems operate and might develop, as well as exploring the current limits on using AI in coming to judgment. There are of course some technical arguments here, but the main focus is on what ‘judgment’ in a legal context actually involves, and what it might not be possible to reproduce satisfactorily in a machine-led approach. It is in answering this question that this contribution addresses the themes of this research handbook by attempting to excavate the nature and character of judicial decision-making and exploring the future for trustworthy and accountable judging in an algorithmically driven future.

Tuesday, May 14, 2024

New California Court for the Mentally Ill Tests a State’s Liberal Values

Tim Arango
The New York Times
Originally posted 21 March 24

Here is an excerpt:

The new initiative, called CARE Court — for Community Assistance, Recovery and Empowerment — is a cornerstone of California’s latest campaign to address the intertwined crises of mental illness and homelessness on the streets of communities up and down the state.

Another piece of the effort is Proposition 1, a ballot measure championed by Gov. Gavin Newsom and narrowly approved by California voters this month. It authorizes $6.4 billion in bonds to pay for thousands of treatment beds and for more housing for the homeless — resources that could help pay for treatment plans put in place by CARE Court judges.

And Mr. Newsom, a Democrat in his second term, has not only promised more resources for treatment but has pledged to make it easier to compel treatment, arguing that civil liberties concerns have left far too many people without the care they need.

So when Ms. Collette went to court, she was surprised, and disappointed, to learn that the judge would not be able to mandate treatment for Tamra.

Instead, it is the treatment providers who would be under court order — to ensure that medication, therapy and housing are available in a system that has long struggled to reliably provide such services.

“I was hoping it would have a little more punch to it,” Ms. Collette said. “I thought it would have a little more power to order them into some kind of care.”


Here is a summary:

California's new CARE Court (Community Assistance, Recovery and Empowerment) is a court system designed to address the issues of mental illness and homelessness. It aims to provide court-ordered care plans for individuals struggling with severe mental illness who are unable to care for themselves. This initiative tests the state's liberal values by balancing individual liberty with the need for intervention to help those in crisis.

Monday, May 13, 2024

Ethical Considerations When Confronted by Racist Patients

Charles Dike
Psychiatric News
Originally published 26 Feb 24

Here is an excerpt:

Abuse of psychiatrists, mostly verbal but sometimes physical, is common in psychiatric treatment, especially on inpatient units. For psychiatrists trained decades ago, experiencing verbal abuse and name calling from patients—and even senior colleagues and teachers—was the norm. The abuse began in medical school, with unconscionable work hours followed by callous disregard of students’ concerns and disparaging statements suggesting the students were too weak or unfit to be doctors.

This abuse continued into specialty training and practice. It was largely seen as a necessary evil of attaining the privilege of becoming a doctor and treating patients whose uncivil behaviors can be excused on account of their ill health. Doctors were supposed to rise above those indignities, focus on the task at hand, and get the patients better in line with our core ethical principles that place caring for the patient above all else. There was no room for discussion or acknowledgement of the doctors’ underlying life experiences, including past trauma, and how patients’ behavior would affect doctors.

Moreover, even in recent times, racial slurs or attacks against physicians of color were not recognized as abuse by the dominant group of doctors; the affected physicians who complained were dismissed as being too sensitive or worse. Some physicians, often not of color, have explained a manic patient’s racist comments as understandable in the context of disinhibition and poor judgment, which are cardinal symptoms of mania, and they are surprised that physicians of color are not so understanding.


Here is a summary:

This article explores the ethical dilemma healthcare providers face when treating patients who express racist views. It acknowledges the provider's obligation to care for the patient's medical needs, while also considering the emotional toll of racist remarks on both the provider and other staff members.

The article discusses the importance of assessing the urgency of the patient's medical condition and their mental capacity. It explores the options of setting boundaries or terminating treatment in extreme cases, while also acknowledging the potential benefits of attempting a dialogue about the impact of prejudice.

Sunday, May 12, 2024

How patients experience respect in healthcare: findings from a qualitative study among multicultural women living with HIV

Fernandez, S.B., Ahmad, A., Beach, M.C. et al.
BMC Med Ethics 25, 39 (2024).

Abstract

Background
Respect is essential to providing high quality healthcare, particularly for groups that are historically marginalized and stigmatized. While ethical principles taught to health professionals focus on patient autonomy as the object of respect for persons, limited studies explore patients’ views of respect. The purpose of this study was to explore the perspectives of a multiculturally diverse group of low-income women living with HIV (WLH) regarding their experience of respect from their medical physicians.

Methods
We analyzed 57 semi-structured interviews conducted at HIV case management sites in South Florida as part of a larger qualitative study that explored practices facilitating retention and adherence in care. Women were eligible to participate if they identified as African American (n = 28), Hispanic/Latina (n = 22), or Haitian (n = 7). They were asked to describe instances when they were treated with respect by their medical physicians. Interviews were conducted by a fluent research interviewer in either English, Spanish, or Haitian Creole, depending on participant’s language preference. Transcripts were translated, back-translated and reviewed in entirety for any statements or comments about “respect.” After independent coding by 3 investigators, we used a consensual thematic analysis approach to determine themes.

Results
Results from this study grouped into two overarching classifications: respect manifested in physicians’ orientation towards the patient (i.e., interpersonal behaviors in interactions) and respect in medical professionalism (i.e., clinic procedures and practices). Four main themes emerged regarding respect in provider’s orientation towards the patient: being treated as a person, treated as an equal, treated without blame or prejudice, and treated with concern/emotional support. Two main themes emerged regarding respect as evidenced in medical professionalism: physician availability and considerations of privacy.

Conclusions
Findings suggest a more robust conception of what ‘respect for persons’ entails in medical ethics for a diverse group of low-income women living with HIV. Findings have implications for broadening areas of focus of future bioethics education, training, and research to include components of interpersonal relationship development, communication, and clinic procedures. We suggest these areas of training may increase respectful medical care experiences and potentially serve to influence persistent and known social and structural determinants of health through provider interactions and health care delivery.


Here is my summary:

The study explored how multicultural women living with HIV experience respectful treatment in healthcare settings.  Researchers found that these women define respect in healthcare as feeling like a person, not just a disease statistic, and being treated as an equal partner in their care. This includes being listened to, having their questions answered, and being involved in decision-making.  The study also highlighted the importance of providers avoiding judgment and blame, and showing concern for the emotional well-being of patients.

Saturday, May 11, 2024

Can Robots have Personal Identity?

Alonso, M.
Int J of Soc Robotics 15, 211–220 (2023).
https://doi.org/10.1007/s12369-022-00958-y

Abstract

This article attempts to answer the question of whether robots can have personal identity. In recent years, and due to the numerous and rapid technological advances, the discussion around the ethical implications of Artificial Intelligence, Artificial Agents or simply Robots, has gained great importance. However, this reflection has almost always focused on problems such as the moral status of these robots, their rights, their capabilities or the qualities that these robots should have to support such status or rights. In this paper I want to address a question that has been much less analyzed but which I consider crucial to this discussion on robot ethics: the possibility, or not, that robots have or will one day have personal identity. The importance of this question has to do with the role we normally assign to personal identity as central to morality. After posing the problem and exposing this relationship between identity and morality, I will engage in a discussion with the recent literature on personal identity by showing in what sense one could speak of personal identity in beings such as robots. This is followed by a discussion of some key texts in robot ethics that have touched on this problem, finally addressing some implications and possible objections. I finally give the tentative answer that robots could potentially have personal identity, given other cases and what we empirically know about robots and their foreseeable future.


The article explores the idea of personal identity in robots. It acknowledges that this is a complex question tied to how we define "personhood" itself.

There are arguments against robots having personal identity, often focusing on the biological and experiential differences between humans and machines.

On the other hand, the article highlights that robots can develop and change over time, forming a narrative of self much like humans do. They can also build relationships with people, suggesting a form of "relational personal identity".

The article concludes that even if a robot's identity is different from a human's, it could still be considered a true identity, deserving of consideration. This opens the door to discussions about the ethical treatment of advanced AI.

Friday, May 10, 2024

Generative artificial intelligence and scientific publishing: urgent questions, difficult answers

J. Bagenal
The Lancet
March 06, 2024

Abstract

Azeem Azhar describes, in Exponential: Order and Chaos in an Age of Accelerating Technology, how human society finds it hard to imagine or process exponential growth and change and is repeatedly caught out by this phenomenon. Whether it is the exponential spread of a virus or the exponential spread of a new technology, such as the smartphone, people consistently underestimate its impact. Azhar argues that an exponential gap has developed between technological progress and the pace at which institutions are evolving to deal with that progress. This is the case in scientific publishing with generative artificial intelligence (AI) and large language models (LLMs). There is guidance on the use of generative AI from organisations such as the International Committee of Medical Journal Editors. But across scholarly publishing such guidance is inconsistent. For example, one study of the 100 top global academic publishers and scientific journals found only 24% of academic publishers had guidance on the use of generative AI, whereas 87% of scientific journals provided such guidance. For those with guidance, 75% of publishers and 43% of journals had specific criteria for the disclosure of use of generative AI. In their book The Coming Wave, Mustafa Suleyman, co-founder and CEO of Inflection AI, and writer Michael Bhaskar warn that society is unprepared for the changes that AI will bring. They describe a person's or group's reluctance to confront difficult, uncertain change as the “pessimism aversion trap”. For journal editors and scientific publishers today, this is a dangerous trap to fall into. All the signs about generative AI in scientific publishing suggest things are not going to be ok.


From behind the paywall.

In 2023, Springer Nature became the first scientific publisher to create a new academic book by empowering authors to use generative AI. Researchers have shown that scientists found it difficult to distinguish between a human generated scientific abstract and one created by generative AI. Noam Chomsky has argued that generative AI undermines education and is nothing more than high-tech plagiarism, and many feel similarly about AI models trained on work without upholding copyright. Plagiarism is a problem in scientific publishing, but those concerned with research integrity are also considering a post-plagiarism world, in which hybrid human-AI writing becomes the norm and differentiating between the two becomes pointless. In the ideal scenario, human creativity is enhanced, language barriers disappear, and humans relinquish control but not responsibility. Such an ideal scenario would be good. But there are two urgent questions for scientific publishing.

First, how can scientific publishers and journal editors assure themselves that the research they are seeing is real? Researchers have used generative AI to create convincing fake clinical trial datasets to support a false scientific hypothesis that could only be identified when the raw data were scrutinised in detail by an expert. Papermills (nefarious businesses that generate poor or fake scientific studies and sell authorship) are a huge problem and contribute to the escalating number of research articles that are retracted by scientific publishers. The battle thus far has been between papermills becoming more sophisticated in their fabrication and ways of manipulating the editorial process and scientific publishers trying to find ways to detect and prevent these practices. Generative AI will turbocharge that race, but it might also break the papermill business model. When rogue academics use generative AI to fabricate datasets, they will not need to pay a papermill and will generate sham papers themselves. Fake studies will exponentially surge and nobody is doing enough to stop this inevitability.

Thursday, May 9, 2024

DNA Tests are Uncovering the True Prevalence of Incest

Sarah Zhang
The Atlantic
Originally posted 18 March 24

Here is an excerpt:

In 1975, a psychiatric textbook put the frequency of incest at one in a million. In the 1980s, feminist scholars argued, based on the testimonies of victims, that incest was far more common than recognized, and in recent years, DNA has offered a new kind of biological proof. Widespread genetic testing is uncovering case after secret case of children born to close biological relatives, providing an unprecedented accounting of incest in modern society.

The geneticist Jim Wilson, at the University of Edinburgh, was shocked by the frequency he found in the U.K. Biobank, an anonymized research database: One in 7,000 people, according to his unpublished analysis, was born to parents who were first-degree relatives, a brother and a sister or a parent and a child. "That's way, way more than I think many people would ever imagine," he told me. And this number is just a floor: It reflects only the cases that resulted in pregnancy, that did not end in miscarriage or abortion, and that led to the birth of a child who grew into an adult who volunteered for a research study.

Most of the people affected may never know about their parentage, but these days, many are stumbling into the truth after AncestryDNA and 23andMe tests.

Neither AncestryDNA nor 23andMe informs customers about incest directly, so the thousand-plus cases [genetic genealogist CeCe Moore] knows of all come from the tiny proportion of testers who investigated further. This meant, for example, uploading their DNA profiles to a third-party genealogy site to analyze what are known as "runs of homozygosity," or ROH: long stretches where the DNA inherited from one's mother and father are identical. For a while, one popular genealogy site instructed anyone who found high ROH to contact Moore. She would call them, one by one, to explain the jargon's explosive meaning. Unwittingly, she became the keeper of what might be the world's largest database of people born out of incest.

In the overwhelming majority of cases, Moore told me, the parents are a father and a daughter or an older brother and a younger sister, meaning a child's existence was likely evidence of sexual abuse. She had no obvious place to send people reeling from such revelations, and she was not herself a trained therapist.


Here is a summary: 

The article "DNA Tests Are Uncovering the True Prevalence of Incest" explores how at-home DNA test kits like AncestryDNA and 23andMe are revealing that children born through incest are more common than previously thought. The story follows Steve Edsel, a man in his 40s who discovered that he is the child of two first-degree relatives: a sister and her older brother. The piece delves into the emotional journey of individuals like Steve who uncover shocking truths about their biological parents through DNA testing, shedding light on a sensitive and taboo topic prevalent across cultures. The narrative intertwines personal stories of discovery, truth, and belonging with statistical insights, highlighting the complexities and challenges faced by those who uncover such familial secrets.

Wednesday, May 8, 2024

AI image generators often give racist and sexist results: can they be fixed?

Ananya
Nature.com
Originally posted 19 March 2024

In 2022, Pratyusha Ria Kalluri, a graduate student in artificial intelligence (AI) at Stanford University in California, found something alarming in image-generating AI programs. When she prompted a popular tool for ‘a photo of an American man and his house’, it generated an image of a pale-skinned person in front of a large, colonial-style home. When she asked for ‘a photo of an African man and his fancy house’, it produced an image of a dark-skinned person in front of a simple mud house — despite the word ‘fancy’.

After some digging, Kalluri and her colleagues found that images generated by the popular tools Stable Diffusion, released by the firm Stability AI, and DALL·E, from OpenAI, overwhelmingly resorted to common stereotypes, such as associating the word ‘Africa’ with poverty, or ‘poor’ with dark skin tones. The tools they studied even amplified some biases. For example, in images generated from prompts asking for photos of people with certain jobs, the tools portrayed almost all housekeepers as people of colour and all flight attendants as women, and in proportions that are much greater than the demographic reality (see ‘Amplified stereotypes’)1. Other researchers have found similar biases across the board: text-to-image generative AI models often produce images that include biased and stereotypical traits related to gender, skin colour, occupations, nationalities and more.


Here is my summary:

AI image generators, like Stable Diffusion and DALL-E, have been found to perpetuate racial and gender stereotypes, displaying biased results. These generators tend to default to outdated Western stereotypes, amplifying clichés and biases in their images. Efforts to detoxify AI image tools have been made, focusing on filtering data sets and refining development stages. However, despite improvements, these tools still struggle with accuracy and inclusivity. Google's Gemini AI image generator faced criticism for inaccuracies in historical image depictions, overcompensating for diversity and sometimes generating offensive or inaccurate results. The article highlights the challenges of fixing the biases in AI image generators and the need to address societal practices that contribute to these issues.
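
One way to picture the "amplification" finding is to compare the demographic share in a sample of generated images against a real-world baseline for the same occupation prompt. The sketch below does this with invented numbers; the occupations, baselines, and counts are placeholders, not data from the studies discussed above.

```python
# Illustrative bias-amplification check for a text-to-image model (made-up numbers).
# For each occupation prompt, compare the share of generated images depicting women
# against a hypothetical real-world labor-force baseline for that occupation.

# Hypothetical share of generated images labeled "woman" per prompt.
generated_share_women = {
    "flight attendant": 0.98,
    "housekeeper": 0.95,
    "surgeon": 0.12,
}

# Hypothetical real-world baselines (share of workers who are women).
baseline_share_women = {
    "flight attendant": 0.72,
    "housekeeper": 0.85,
    "surgeon": 0.38,
}

for job in generated_share_women:
    generated = generated_share_women[job]
    baseline = baseline_share_women[job]
    amplification = generated - baseline  # positive = stereotype amplified
    print(f"{job:>17}: generated {generated:.0%} vs baseline {baseline:.0%} "
          f"(amplification {amplification:+.0%})")
```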

Tuesday, May 7, 2024

Back from Italy!!

 Good morning-

Here are a few pictures from my vacation in Italy.

I will start posting again tomorrow.