Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, September 30, 2024

Antidiscrimination Law Meets AI—New Requirements for Clinicians, Insurers, & Health Care Organizations

Mello, M. M., & Roberts, J. L. (2024).
JAMA Health Forum, 5(8), e243397–e243397.

Responding to the threat that biased health care artificial intelligence (AI) tools pose to health equity, the US Department of Health and Human Services Office for Civil Rights (OCR) published a final rule in May 2024 holding AI users legally responsible for managing the risk of discrimination. This move raises questions about the rule’s fairness and potential effects on AI-enabled health care.

The New Regulatory Requirements

Section 1557 of the Affordable Care Act prohibits recipients of federal funding from discriminating in health programs and activities based on race, color, national origin, sex, age, or disability. Regulated entities include health care organizations, health insurers, and clinicians that participate in Medicare, Medicaid, or other programs. The OCR’s rule sets forth the obligations of these entities relating to the use of decision support tools in patient care, including AI-driven tools and simpler, analog aids like flowcharts and guidelines.

The rule clarifies that Section 1557 applies to discrimination arising from use of AI tools and establishes new legal requirements. First, regulated entities must make “reasonable efforts” to determine whether their decision support tools use protected traits as input variables or factors. Second, for tools that do so, organizations “must make reasonable efforts to mitigate the risk of discrimination.”

Starting in May 2025, the OCR will address potential violations of the rule through complaint-driven investigations and compliance reviews. Individuals can also seek to enforce Section 1557 through private lawsuits. However, courts disagree about whether private actors can sue for disparate impact (practices that are neutral on their face but have discriminatory effects).

---------------------

Here are some thoughts:

Addressing Bias in Healthcare AI: New Regulatory Requirements and Implications

The US Department of Health and Human Services Office for Civil Rights (OCR) has issued a final rule holding healthcare providers liable for managing the risk of discrimination in AI tools used in patient care. This move aims to address the threat of biased healthcare AI tools to health equity.

New Regulatory Requirements

The OCR's rule clarifies that Section 1557 of the Affordable Care Act applies to discrimination arising from the use of AI tools. Regulated entities must make "reasonable efforts" to determine whether their decision support tools use protected traits as input variables or factors. If so, they must mitigate the risk of discrimination.

Fairness and Enforcement

The rule raises questions about fairness and potential effects on AI-enabled healthcare. While the OCR's approach is flexible, it may create uncertainty for regulated entities. The rule applies only to organizations using AI tools, not developers, who are regulated by other federal rules. The OCR's enforcement will focus on complaint-driven investigations and compliance reviews, with penalties including corrective action plans.

Implications and Concerns

The rule may create market pressure for developers to generate and provide information about bias in their products. However, concerns remain about the compliance burden on adopters, particularly small physician practices and low-resourced organizations. The OCR must provide further guidance and clarification to ensure meaningful compliance.

Facilitating Meaningful Compliance

Additional resources are necessary to make compliance possible for all healthcare organizations. Emerging tools for bias assessment and affordable technical assistance are essential. The question of who will pay for AI assessments looms large, and the business case for adopting AI tools may evaporate if assessment and monitoring costs are high and not reimbursed.

Conclusion

The OCR's rule is an important step towards reducing discrimination in healthcare AI. However, realizing this vision requires resources to make meaningful compliance possible for all healthcare organizations. By addressing bias and promoting equity, we can ensure that AI tools benefit all patients, particularly vulnerable populations.

Sunday, September 29, 2024

Whistleblowing in science: this physician faced ostracization after standing up to pharma

Sara Reardon
nature.com
Originally posted 20 Aug 24

The image of a lone scientist standing up for integrity against a pharmaceutical giant seems romantic and compelling. But to haematologist Nancy Olivieri, who went public when the company sponsoring her drug trial for a genetic blood disorder tried to suppress data about harmful side effects, the experience was as unglamorous as it was damaging and isolating. “There’s a lot of people who fight for justice in research integrity and against the pharmaceutical industry, but very few people know what it’s like to take on the hospital administrators” too, she says.

Now, after more than 30 years of ostracization by colleagues, several job losses and more than 20 lawsuits — some of which are ongoing — Olivieri is still amazed that what she saw as efforts to protect her patients could have proved so controversial, and that so few people took her side. Last year, she won the John Maddox Prize, a partnership between the London-based charity Sense about Science and Nature, which recognizes “researchers who stand up and speak out for science” and who achieve changes amid hostility. “It’s absolutely astounding to me that you could become famous as a physician for saying, ‘I think there might be a complication here,’” she says. “There was a lot of really good work that we could have done that we wasted a lot of years not doing because of all this.”

Olivieri didn’t set out to be a troublemaker. As a young researcher at the University of Toronto (UT), Canada, in the 1980s, she worked with children with thalassaemia — a blood condition that prevents the body from making enough oxygen-carrying haemoglobin, and that causes a fatal build-up of iron in the organs if left untreated. She worked her way up to become head of the sickle-cell-disease programme at the city’s Hospital for Sick Children (SickKids). In 1989, she started a clinical trial at SickKids to test a drug called deferiprone that traps iron in the blood. The hospital eventually brought in a pharmaceutical company called Apotex, based in Toronto, Canada, to co-sponsor the study as part of regulatory requirements.


Here are some thoughts:

The case of Nancy Olivieri, a haematologist who blew the whistle on a pharmaceutical company's attempts to suppress data about harmful side effects of a drug, highlights the challenges and consequences faced by researchers who speak out against industry and institutional pressures. Olivieri's experience demonstrates how institutions can turn against researchers who challenge industry interests, leading to isolation, ostracization, and damage to their careers. Despite the risks, Olivieri's story emphasizes the crucial role of allies and support networks in helping whistle-blowers navigate the challenges they face.

The case also underscores the importance of maintaining research integrity and transparency, even in the face of industry pressure. Olivieri's experience shows that prioritizing patient safety and well-being over industry interests is critical, and institutions must be held accountable for their actions. Additionally, the significant emotional toll that whistle-blowing can take on individuals, including anxiety, isolation, and disillusionment, must be acknowledged.

To address these issues, policy reforms are necessary to protect researchers from retaliation and ensure that they can speak out without fear of retribution. Industry transparency is also essential to minimize conflicts of interest. Furthermore, institutions and professional organizations must establish support networks for researchers who speak out against wrongdoing.

Saturday, September 28, 2024

Humanizing Chatbots Is Hard To Resist — But Why?

Madeline G. Reinecke
Practical Ethics
Originally posted 30 Aug 24

You might recall a story from a few years ago, concerning former Google software engineer Blake Lemoine. Part of Lemoine’s job was to chat with LaMDA, a large language model (LLM) in development at the time, to detect discriminatory speech. But the more Lemoine chatted with LaMDA, the more he became convinced: The model had become sentient and was being deprived of its rights as a Google employee. 

Though Google swiftly denied Lemoine’s claims, I’ve since wondered whether this anthropomorphic phenomenon — seeing a “mind in the machine” — might be a common occurrence in LLM users. In fact, in this post, I’ll argue that it’s bound to be common, and perhaps even irresistible, due to basic facts about human psychology. 

Emerging work suggests that a non-trivial number of people do attribute humanlike characteristics to LLM-based chatbots. This is to say they “anthropomorphize” them. In one study, 67% of participants attributed some degree of phenomenal consciousness to ChatGPT: saying, basically, there is “something it is like” to be ChatGPT. In a separate survey, researchers showed participants actual ChatGPT transcripts, explaining that they were generated by an LLM. Actually seeing the natural language “skills” of ChatGPT further increased participants’ tendency to anthropomorphize the model. These effects were especially pronounced for frequent LLM users.

Why does anthropomorphism of these technologies come so easily? Is it irresistible, as I’ve suggested, given features of human psychology?


Here are some thoughts:

The article explores the phenomenon of anthropomorphism in Large Language Models (LLMs), where users attribute human-like characteristics to AI systems. This tendency is rooted in human psychology, particularly in our inclination to over-detect agency and our association of communication with agency. Studies have shown that a significant number of people, especially frequent users, attribute human-like characteristics to LLMs, raising concerns about trust, misinformation, and the potential for users to internalize inaccurate information.

The article highlights two key cognitive mechanisms underlying anthropomorphism. Firstly, humans have a tendency to over-detect agency, which may have evolved as an adaptive mechanism to detect potential threats. This is exemplified in a classic psychology study where participants attributed human-like actions to shapes moving on a screen. Secondly, language is seen as a sign of agency, even in preverbal infants, which may explain why LLMs' command of natural language serves as a psychological signal of agency.

The author argues that AI developers have a key responsibility to design systems that mitigate anthropomorphism. This can be achieved through design choices such as using disclaimers or avoiding the use of first-personal pronouns. However, the author also acknowledges that these measures may not be sufficient to override the deep tendencies of the human mind. Therefore, a priority for future research should be to investigate whether good technology design can help us resist the pitfalls of LLM-oriented anthropomorphism.

Ultimately, anthropomorphism is a double-edged sword, making AI systems more relatable and engaging while also risking misinformation and mistrust. By understanding the cognitive mechanisms underlying anthropomorphism, we can develop strategies to mitigate its negative consequences. Future research directions should include investigating effective interventions, exploring the boundaries of anthropomorphism, and developing responsible AI design guidelines that account for anthropomorphism.

Friday, September 27, 2024

Small town living: Unique ethical challenges of rural pediatric integrated primary care

Jaques-Leonard, M. L., et al. (2021).
Clinical Practice in Pediatric Psychology,
9(4), 412–422.

Abstract

Objective: The objective of this paper is to address ethical and training considerations with behavioral health (BH) services practicing within rural, integrated primary care (IPC) sites through the conceptual framework of an ethical acculturation model.

Method: Relevant articles are presented along with a description of how the acculturation model can be implemented to address ethical dilemmas.

Results: Recommendations are provided regarding practice considerations when using the acculturation model and the utility of the model for both established BH practitioners and trainees.

Conclusions: Psychologists integrated into rural IPC teams may be able to enhance their ethical practice and improve outcomes for patients and families through the use of the acculturation model. Psychologists serving as supervisors can utilize the acculturation model to provide valuable experiences to trainees in addressing ethical dilemmas when competing ethical principles are present.

Impact Statement

Implications for Impact Statement: By addressing ethical dilemmas through an acculturation model, psychologists may prevent themselves from drifting away from American Psychological Association ethical principles within the context of a multidisciplinary team while simultaneously providing valuable learning opportunities for trainees. This focus is particularly important in rural settings where access to specialty care and other resources are limited, and a psychologist may be the only licensed behavioral health provider on a multidisciplinary team.

Thursday, September 26, 2024

Decoding loneliness: Can explainable AI help in understanding language differences in lonely older adults?

Wang, N., et al. (2024).
Psychiatry Research, 339, 116078.

Abstract

Study objectives
Loneliness impacts the health of many older adults, yet effective and targeted interventions are lacking. Compared to surveys, speech data can capture the personalized experience of loneliness. In this proof-of-concept study, we used Natural Language Processing to extract novel linguistic features and AI approaches to identify linguistic features that distinguish lonely adults from non-lonely adults.

Methods
Participants completed UCLA loneliness scales and semi-structured interviews (sections: social relationships, loneliness, successful aging, meaning/purpose in life, wisdom, technology and successful aging). We used the Linguistic Inquiry and Word Count (LIWC-22) program to analyze linguistic features and built a classifier to predict loneliness. Each interview section was analyzed using an explainable AI (XAI) model to classify loneliness.

Results
The sample included 97 older adults (age 66–101 years, 65 % women). The model had high accuracy (Accuracy: 0.889, AUC: 0.8), precision (F1: 0.8), and recall (1.0). The sections on social relationships and loneliness were most important for classifying loneliness. Social themes, conversational fillers, and pronoun usage were important features for classifying loneliness.

Conclusions
XAI approaches can be used to detect loneliness through the analyses of unstructured speech and to better understand the experience of loneliness.
------------

Here are some thoughts. AI has the potential to be helpful for mental health professionals.

Researchers have made notable progress in detecting loneliness through artificial intelligence (AI). A recently published proof-of-concept study shows that AI can identify loneliness by analyzing unstructured speech. This approach offers a promising avenue for addressing loneliness, particularly among older adults.

The analysis showed that lonely individuals frequently referenced social status, religion, and expressed more negative emotions. In contrast, non-lonely individuals focused on social connections, family, and lifestyle. Additionally, lonely individuals used more first-person singular pronouns, indicating a self-focused perspective, whereas non-lonely individuals used more first-person plural pronouns, suggesting a sense of inclusion and connection.

Furthermore, the study found that conversational fillers, non-fluencies, and internet slang were more prevalent in the speech of lonely individuals. Lonely individuals also used more causation conjunctions, indicating a tendency to provide detailed explanations of their experiences. These findings suggest that the way people communicate may reflect their feelings about social relationships.

The AI model offers a scalable and less intrusive method for assessing loneliness, which can significantly impact mental and physical health, particularly in older adults. While the study has limitations, including a relatively small sample size, the researchers aim to expand their work to more diverse populations and explore how to better assess loneliness.
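
To make the general pipeline concrete, here is a minimal sketch of the approach, not the study's actual code: a classifier trained on per-interview linguistic feature vectors, with standardized coefficients inspected as a crude form of explanation. The feature names and data below are placeholders, and the study itself used LIWC-22 features with an explainable AI model rather than this exact setup.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Hypothetical LIWC-style feature categories; the real study used LIWC-22 output.
features = ["i_pronouns", "we_pronouns", "negative_emotion",
            "social_words", "fillers", "causation"]
X = pd.DataFrame(rng.random((97, len(features))), columns=features)  # 97 participants (placeholder values)
y = rng.integers(0, 2, size=97)  # 1 = lonely, 0 = not lonely (placeholder labels)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

# A crude "explanation": standardized coefficients show which features push the
# prediction toward the lonely class. Real XAI work would use richer attribution
# methods (e.g., SHAP values) rather than raw coefficients.
model.fit(X, y)
coefs = pd.Series(model.named_steps["logisticregression"].coef_[0], index=features)
print(coefs.sort_values(ascending=False))
```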

Wednesday, September 25, 2024

Vote for Kamala Harris to Support Science, Health and the Environment

The Editors
Scientific American
Originally posted 16 Sept 24

In the November election, the U.S. faces two futures. In one, the new president offers the country better prospects, relying on science, solid evidence and the willingness to learn from experience. She pushes policies that boost good jobs nationwide by embracing technology and clean energy. She supports education, public health and reproductive rights. She treats the climate crisis as the emergency it is and seeks to mitigate its catastrophic storms, fires and droughts.

In the other future, the new president endangers public health and safety and rejects evidence, preferring instead nonsensical conspiracy fantasies. He ignores the climate crisis in favor of more pollution. He requires that federal officials show personal loyalty to him rather than upholding U.S. laws. He fills positions in federal science and other agencies with unqualified ideologues. He goads people into hate and division, and he inspires extremists at state and local levels to pass laws that disrupt education and make it harder to earn a living.

Only one of these futures will improve the fate of this country and the world. That is why, for only the second time in our magazine’s 179-year history, the editors of Scientific American are endorsing a candidate for president. That person is Kamala Harris.

Before making this endorsement, we evaluated Harris’s record as a U.S. senator and as vice president under Joe Biden, as well as policy proposals she’s made as a presidential candidate. Her opponent, Donald Trump, who was president from 2017 to 2021, also has a record—a disastrous one. Let’s compare.


Here are some thoughts:

The upcoming U.S. presidential election presents two vastly different futures for the country. On one hand, Vice President Kamala Harris offers a vision built on science, evidence, and a willingness to learn from experience. Her policies focus on creating good jobs, promoting clean energy, supporting education, public health, and reproductive rights, and addressing the climate crisis.

On the other hand, former President Donald Trump's vision rejects evidence and relies on conspiracy theories. His policies endanger public health and safety, ignore the climate crisis, and promote division and extremism.

Key Policy Differences:
  • Healthcare: Harris supports expanding the Affordable Care Act and Medicaid, while Trump proposes cuts to Medicare and Medicaid and repealing the ACA.
  • Reproductive Rights: Harris advocates for reinstating Roe v. Wade protections, while Trump appointed justices who overturned it and restricts access to abortion.
  • Gun Safety: Harris supports closing gun-show loopholes, while Trump promises to undo Biden-Harris gun measures.
  • Environment and Climate: Harris acknowledges climate change and supports renewable energy, while Trump denies it and rolled back environmental policies.
  • Technology: Harris promotes safe AI development, while Trump's Project 2025 framework would overturn AI safeguards.
  • Economic Implications: Harris's platform aims to create jobs in rural America through renewable energy projects and increase tax deductions for small businesses. Trump's policies may lead to increased pollution, division, and economic uncertainty.

Conclusion:

The choice between Harris and Trump represents two distinct futures for the U.S. Harris offers a path forward guided by rationality and respect, while Trump promotes division and demagoguery. The outcome of this election will significantly impact the country's direction.

Tuesday, September 24, 2024

This researcher wants to replace your brain, little by little

Antonio Regalado
MIT Technology Review
Originally posted 16 Aug 24

Jean Hébert, a new hire with the US Advanced Research Projects Agency for Health (ARPA-H), is expected to lead a major new initiative around “functional brain tissue replacement,” the idea of adding youthful tissue to people’s brains.

President Joe Biden created ARPA-H in 2022, as an agency within the Department of Health and Human Services, to pursue what he called  “bold, urgent innovation” with transformative potential.

The brain renewal concept could have applications such as treating stroke victims, who lose areas of brain function. But Hébert, a biologist at the Albert Einstein College of Medicine, has most often proposed total brain replacement, along with replacing other parts of our anatomy, as the only plausible means of avoiding death from old age.

As he described in his 2020 book, Replacing Aging, Hébert thinks that to live indefinitely people must find a way to substitute all their body parts with young ones, much like a high-mileage car is kept going with new struts and spark plugs.


Here are some thoughts:

The US Advanced Research Projects Agency for Health (ARPA-H) has taken a bold step by hiring Jean Hébert, a researcher who advocates for a radical plan to defeat death by replacing human body parts, including the brain. Hébert's idea involves progressively replacing brain tissue with youthful lab-made tissue, allowing the brain to adapt and maintain memories and self-identity. This concept is not widely accepted in the scientific community, but ARPA-H has endorsed Hébert's proposal with a potential $110 million project to test his ideas in animals.

From an ethical standpoint, Hébert's proposal raises concerns, such as the potential use of human fetuses as a source of life-extending parts and the creation of non-sentient human clones for body transplants. However, Hébert's idea relies on the brain's ability to adapt and reorganize itself, a concept supported by evidence from rare cases of benign brain tumors and experiments with fetal-stage cell transplants. The development of youthful brain tissue facsimiles using stem cells is a significant scientific challenge, requiring the creation of complex structures with multiple cell types.

The success of Hébert's proposal depends on various factors, including the ability of young brain tissue to function correctly in an elderly person's brain, establishing connections, and storing and sending electro-chemical information. Despite these uncertainties, ARPA-H's endorsement and potential funding of Hébert's proposal demonstrate a willingness to explore unconventional approaches to address aging and age-related diseases. This move may pave the way for future research in extreme life extension and challenge societal norms and values surrounding aging and mortality.

Hébert's work has sparked interest among immortalists, a fringe community devoted to achieving eternal life. His connections to this community and his willingness to explore radical approaches have made him an edgy choice for ARPA-H. However, his focus on the neocortex, the outer part of the brain responsible for most of our senses, reasoning, and memory, may hold the key to understanding how to replace brain tissue without losing essential functions. As Hébert embarks on this ambitious project, the scientific community will be watching closely to see if his ideas can overcome the significant scientific and ethical hurdles associated with replacing human brain tissue.

Monday, September 23, 2024

Generative AI Can Harm Learning

Bastani, H. et al. (July 15, 2024).
Available at SSRN:

Abstract

Generative artificial intelligence (AI) is poised to revolutionize how humans work, and has already demonstrated promise in significantly improving human productivity. However, a key remaining question is how generative AI affects learning, namely, how humans acquire new skills as they perform tasks. This kind of skill learning is critical to long-term productivity gains, especially in domains where generative AI is fallible and human experts must check its outputs. We study the impact of generative AI, specifically OpenAI's GPT-4, on human learning in the context of math classes at a high school. In a field experiment involving nearly a thousand students, we have deployed and evaluated two GPT based tutors, one that mimics a standard ChatGPT interface (called GPT Base) and one with prompts designed to safeguard learning (called GPT Tutor). These tutors comprise about 15% of the curriculum in each of three grades. Consistent with prior work, our results show that access to GPT-4 significantly improves performance (48% improvement for GPT Base and 127% for GPT Tutor). However, we additionally find that when access is subsequently taken away, students actually perform worse than those who never had access (17% reduction for GPT Base). That is, access to GPT-4 can harm educational outcomes. These negative learning effects are largely mitigated by the safeguards included in GPT Tutor. Our results suggest that students attempt to use GPT-4 as a "crutch" during practice problem sessions, and when successful, perform worse on their own. Thus, to maintain long-term productivity, we must be cautious when deploying generative AI to ensure humans continue to learn critical skills.


Here are some thoughts:

The deployment of GPT-based tutors in educational settings presents a cautionary tale. While generative AI tools like ChatGPT can make tasks significantly easier for humans, they also risk deteriorating our ability to effectively learn essential skills. This phenomenon is not new, as previous technologies like typing and calculators have also reduced the need for certain skills. However, ChatGPT's broader intellectual capabilities and propensity for providing incorrect responses make it unique.

This unreliability poses significant challenges: students may struggle to detect ChatGPT's errors or be unwilling to invest the effort required to verify its responses, which can undermine their learning and understanding of critical skills. The findings suggest that more work is needed to ensure that generative AI enhances education rather than diminishes it.

The findings underscore the importance of critical thinking and media literacy in the age of AI. Educators must be aware of the potential risks and benefits of AI-powered tools and design them to augment human capabilities rather than replace them. Accountability and transparency in AI development and deployment are crucial to mitigating these risks. By acknowledging these challenges, we can harness the potential of AI to enhance education and promote meaningful learning.
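
As a purely illustrative sketch of the design contrast the abstract describes, the snippet below shows what an answer-giving "base" prompt versus a learning-safeguard "tutor" prompt could look like. These prompts are assumptions for illustration only, not the ones used in the study.

```python
# Two contrasting system prompts: an answer-giving assistant vs. a tutor that
# withholds the final answer and nudges the student to work through each step.
BASE_SYSTEM_PROMPT = "You are a helpful assistant. Answer the student's math question."

TUTOR_SYSTEM_PROMPT = (
    "You are a math tutor. Never reveal the final answer. "
    "Give one hint at a time, ask the student to attempt the next step, "
    "and point out errors in their work instead of correcting them yourself."
)

def build_messages(system_prompt: str, student_question: str) -> list[dict]:
    """Assemble a chat-style message list for whatever LLM client is used."""
    return [{"role": "system", "content": system_prompt},
            {"role": "user", "content": student_question}]

if __name__ == "__main__":
    question = "Solve for x: 3x + 7 = 19"
    print(build_messages(TUTOR_SYSTEM_PROMPT, question))
```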

Sunday, September 22, 2024

The staggering death toll of scientific lies

Kelsey Piper
vox.com
Originally posted 23 Aug 24

Here is an excerpt:

The question of whether research fraud should be a crime

In some cases, research misconduct may be hard to distinguish from carelessness.

If a researcher fails to apply the appropriate statistical correction for multiple hypothesis testing, they will probably get some spurious results. In some cases, researchers are heavily incentivized to be careless in these ways by an academic culture that puts non-null results above all else (that is, rewarding researchers for finding an effect even if it is not a methodologically sound one, while being unwilling to publish sound research if it finds no effect).

But I’d argue it’s a bad idea to prosecute such behavior. It would produce a serious chilling effect on research, and likely make the scientific process slower and more legalistic — which also results in more deaths that could be avoided if science moved more freely.

So the conversation about whether to criminalize research fraud tends to focus on the most clear-cut cases: intentional falsification of data. Elisabeth Bik, a scientific researcher who studies fraud, made a name for herself by demonstrating that photographs of test results in many medical journals were clearly altered. That’s not the kind of thing that can be an innocent mistake, so it represents something of a baseline for how often manipulated data is published.

While technically some scientific fraud could fall under existing statutes that prohibit lying on, say, a grant application, in practice scientific fraud is more or less never prosecuted. Poldermans eventually lost his job in 2011, but most of his papers weren’t even retracted, and he faced no further consequences.


Here are some thoughts:

The case of Don Poldermans, a cardiologist whose falsified clinical trial data is estimated to have contributed to thousands of deaths, highlights the severe consequences of scientific misconduct. This instance demonstrates how fraudulent research can have devastating effects on patients' lives. The fact that Poldermans' data was eventually found to be fake, yet his research had already been widely accepted and implemented and most of his papers were never retracted, raises serious concerns about the accountability and oversight within the scientific community.

The current consequences for scientific fraud are often inadequate, allowing perpetrators to go unpunished or face minimal penalties. This lack of accountability creates an environment where misconduct can thrive, putting lives at risk. In Poldermans' case, he lost his job but faced no further consequences, despite the severity of his actions.

Prosecution or external oversight could provide the necessary accountability and shift incentives to address misconduct. However, prosecution is a blunt tool and may not be the best solution. Independent scientific review boards could also be effective in addressing scientific fraud. Ultimately, building institutions within the scientific community to police misconduct has had limited success, suggesting a need for external institutions to play a role.

The need for accountability and consequences for scientific fraud cannot be overstated. It is essential to prevent harm and ensure the integrity of research. By implementing measures to address misconduct, we can protect patients and maintain trust in the scientific community. The Poldermans case serves as a stark reminder of the importance of addressing scientific fraud and ensuring accountability.

Saturday, September 21, 2024

Should extreme misogyny be labelled terrorism?

Alexander Horne
The Spectator
Originally posted 19 Aug 24

The Home Secretary Yvette Cooper has reportedly ‘ordered a review’ of Britain’s counter-extremism strategy. According to the Daily Telegraph, she was minded to treat ‘extreme misogyny’ as terrorism for the first time. It is suggested that the review would be completed later in the autumn, and that a new counter-extremism strategy would be launched early next year.

When discussing this issue, it is tempting to use the terms ‘terrorism’ and ‘extremism’ interchangeably. In law, however, they are not identical and should not be conflated. The definition of terrorism is contained in section 1 of the Terrorism Act 2000 and captures actions, or threats of action, designed to influence the government, or intimidate the public (or sections of the public) where such activities are made for the purpose of advancing a political, religious, racial or ideological cause. Actions and threats covered by section 1 include those which would involve serious violence against a person, endanger a person’s life, or create a serious risk to the safety of the public or a section of the public.

Organisations engaged in terrorism-related activity can be banned by the state. The dissemination of terrorist publications, and the promotion or encouragement of terrorism (including the glorification of terrorism), is also illegal.


Here are some thoughts:

The UK Home Secretary, Yvette Cooper, has ordered a review of Britain's counter-extremism strategy, considering treating "extreme misogyny" as terrorism for the first time. However, the distinction between terrorism and extremism is crucial, with terrorism being a legally defined term under the Terrorism Act 2000, whereas extremism is a broader and less defined concept. The review aims to address the complexities of combating extremism while avoiding the pitfalls of restricting lawful activities and conflicting with fundamental rights and freedoms.

The government's efforts to restrict extremism have faced criticism and challenges in the past, with concerns about clarifying the definition of extremism and avoiding conflicts with existing legal frameworks. The recent review by Sir William Shawcross highlighted the need for a proportionate approach to addressing all extremist ideologies. Any new measures to limit extremist activities must be taken in an even-handed way, addressing violent misogynistic views across the board, and enforcing existing laws more effectively, impartially, and justly.

Friday, September 20, 2024

Machine Psychology: Investigating emergent capabilities and behavior in large language models using psychological methods

Hagendorff, T. et al. (2023).
arXiv (Cornell University).

Abstract

Large language models (LLMs) show increasingly advanced emergent capabilities and are being incorporated across various societal domains. Understanding their behavior and reasoning abilities therefore holds significant importance. We argue that a fruitful direction for research is engaging LLMs in behavioral experiments inspired by psychology that have traditionally been aimed at understanding human cognition and behavior. In this article, we highlight and summarize theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table. It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks and focuses instead on computational insights that move us toward a better understanding and discovery of emergent abilities and behavioral patterns in LLMs. We review existing work taking this approach, synthesize best practices, and highlight promising future directions. We also highlight the important caveats of applying methodologies designed for understanding humans to machines. We posit that leveraging tools from experimental psychology to study AI will become increasingly valuable as models evolve to be more powerful, opaque, multi-modal, and integrated into complex real-world settings.

Here are some thoughts:

Machine psychology is an emerging field that aims to understand the complex behaviors of large language models (LLMs) by applying experimental methods traditionally used in psychology. By treating LLMs as participants in psychological experiments, researchers can gain valuable insights into their reasoning, decision-making, and potential biases. This approach goes beyond traditional performance metrics, focusing instead on uncovering the underlying mechanisms of LLM behavior. While caution is necessary to avoid over-humanizing these models, the careful application of psychological concepts can significantly enhance our ability to explain, predict, and safely develop LLMs.
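
A bare-bones sketch of what "treating an LLM as an experimental participant" can look like in practice is shown below. The query_model function is a placeholder for whatever model client is being studied, and the bat-and-ball item is just an example stimulus, not drawn from the paper.

```python
from collections import Counter

def query_model(prompt: str) -> str:
    """Placeholder for a call to whatever LLM is under study; returns a canned
    answer here so the sketch runs end to end. Swap in a real client call."""
    return "0.10"

# A classic cognitive-reflection-style item, used purely as an example stimulus.
VIGNETTE = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
            "than the ball. How much does the ball cost? Answer with a number only.")

def run_condition(prompt: str, n_trials: int = 20) -> Counter:
    """Present the same item repeatedly and tally responses, the way a
    behavioral experiment aggregates over trials or participants."""
    responses = Counter()
    for _ in range(n_trials):
        responses[query_model(prompt).strip()] += 1
    return responses

if __name__ == "__main__":
    print(run_condition(VIGNETTE))
```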

Thursday, September 19, 2024

Who is an AI Ethicist? An Empirical Study of Expertise, Skills, and Profiles to Build a Competency Framework

Cocchiaro, M. Z., & Morley, J. (July 10, 2024).

Abstract

Over the last decade the figure of the AI Ethicist has seen significant growth in the ICT market. However, only a few studies have taken an interest in this professional profile, and they have yet to provide a normative discussion of its expertise and skills. The goal of this article is to initiate such discussion. We argue that AI Ethicists should be experts and use a heuristic to identify them. Then, we focus on their specific kind of moral expertise, drawing on a parallel with the expertise of Ethics Consultants in clinical settings and on the bioethics literature on the topic. Finally, we highlight the differences between Health Care Ethics Consultants and AI Ethicists and derive the expertise and skills of the latter from the roles that AI Ethicists should have in an organisation.

Here are some thoughts:

The role of AI Ethicists has expanded significantly in the Information and Communications Technology (ICT) market over the past decade, yet there is a lack of studies providing a normative discussion on their expertise and skills. This article aims to initiate such a discussion by arguing that AI Ethicists should be considered experts, using a heuristic to identify them. It draws parallels with Ethics Consultants in clinical settings and bioethics literature to define their specific moral expertise. The article also highlights the differences between Health Care Ethics Consultants and AI Ethicists, deriving the latter's expertise and skills from their organizational roles.

Key elements for establishing and recognizing the AI Ethicist profession include credibility, independence, and the avoidance of conflicts of interest. The article emphasizes the need for AI Ethicists to be free from conflicts of interest to avoid ethical washing and to express critical viewpoints. It suggests that AI Ethicists might face civil liability risks and could benefit from protections such as civil liability insurance.

The development of professional associations and certifications can help establish a professional identity and quality criteria, enhancing the credibility of AI Ethicists. The article concludes by addressing the discrepancy between principles for trustworthy AI and the actual capabilities of professionals navigating AI ethics, advocating for AI Ethicists to be not only facilitators but also researchers and educators. It outlines the necessary skills and knowledge for AI Ethicists to effectively address questions in AI Ethics.

Wednesday, September 18, 2024

Dimensions of wisdom perception across twelve countries on five continents

Rudnev, M., Barrett, H.C., Buckwalter, W. et al.
Nat Commun 15, 6375 (2024).

Abstract

Wisdom is the hallmark of social judgment, but how people across cultures recognize wisdom remains unclear—distinct philosophical traditions suggest different views of wisdom’s cardinal features. We explore perception of wise minds across 16 socio-economically and culturally diverse convenience samples from 12 countries. Participants assessed wisdom exemplars, non-exemplars, and themselves on 19 socio-cognitive characteristics, subsequently rating targets’ wisdom, knowledge, and understanding. Analyses reveal two positively related dimensions—Reflective Orientation and Socio-Emotional Awareness. These dimensions are consistent across the studied cultural regions and interact when informing wisdom ratings: wisest targets—as perceived by participants—score high on both dimensions, whereas the least wise are not reflective but moderately socio-emotional. Additionally, individuals view themselves as less reflective but more socio-emotionally aware than most wisdom exemplars. Our findings expand folk psychology and social judgment research beyond the Global North, showing how individuals perceive desirable cognitive and socio-emotional qualities, and contribute to an understanding of mind perception.

Here are some thoughts:

In the context of challenging life decisions under uncertainty, individuals perceive wisdom in themselves and others along two key dimensions: Reflective Orientation and Socio-Emotional Awareness. This study found that these dimensions are consistent across eight cultural regions and thirteen languages, suggesting they may represent psychological universals. The research emphasizes characteristics attributed to wise decision-making, contrasting with previous studies focused on social judgment about groups or general mental states. The results indicate that the structure of wisdom perception dimensions remains stable across diverse cultures, although further research is needed to confirm their universality in other regions.

One significant finding is that when Reflective Orientation is held constant, Socio-Emotional Awareness negatively correlates with wisdom ratings. This means that individuals perceived as more caring may be viewed as less wise if they are equally reflective. For instance, people who act impulsively or are overly emotional might be admired but not considered wise. Thus, Reflective Orientation appears to be a necessary condition for higher wisdom ratings, while Socio-Emotional Awareness contributes positively only when Reflective Orientation is satisfied.

The study also noted high cross-cultural agreement regarding Reflective Orientation, but considerable variation in Socio-Emotional Awareness. This suggests that cultural norms heavily influence perceptions of caring behaviors. Reflective Orientation may be viewed as the primary element of wisdom across cultures, while Socio-Emotional Awareness is seen as a secondary, context-dependent aspect. This aligns with cultural narratives that often depict wise individuals, such as philosophers, who are revered for their insights despite being socially detached.

Tuesday, September 17, 2024

A cortical surface template for human neuroscience

Feilong, M., Jiahui, G., Gobbini, M.I. et al.
Nat Methods (2024).

Abstract

Neuroimaging data analysis relies on normalization to standard anatomical templates to resolve macroanatomical differences across brains. Existing human cortical surface templates sample locations unevenly because of distortions introduced by inflation of the folded cortex into a standard shape. Here we present the onavg template, which affords uniform sampling of the cortex. We created the onavg template based on openly available high-quality structural scans of 1,031 brains—25 times more than existing cortical templates. We optimized the vertex locations based on cortical anatomy, achieving an even distribution. We observed consistently higher multivariate pattern classification accuracies and representational geometry inter-participant correlations based on onavg than on other templates, and onavg only needs three-quarters as much data to achieve the same performance compared with other templates. The optimized sampling also reduces CPU time across algorithms by 1.3–22.4% due to less variation in the number of vertices in each searchlight.

Here are some thoughts:

Neuroscientists face challenges in comparing brain activity data across individuals due to variations in brain shape. To address this, researchers align data to a common reference using cortical surface templates, which map brain activity onto a brain model. Traditional templates, based on 40 brains, have limitations such as uneven sampling and reliance on a spherical brain model, leading to biases and distortions in data analysis.

To improve this, a Dartmouth team developed the "onavg" template using data from 1,031 brain scans from OpenNeuro. This template better represents the human brain by accurately mapping its geometric shape and ensuring even distribution of data points, reducing biases. The onavg template was tested and found to provide more accurate and reliable data with less computational effort, outperforming older models.
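
The abstract's point about "less variation in the number of vertices in each searchlight" can be illustrated with a small, self-contained simulation (not the onavg pipeline or its data): when surface points are sampled unevenly, fixed-radius neighborhoods contain very different numbers of points, whereas even sampling keeps the counts nearly constant.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
n_vertices, radius = 20_000, 0.05

def sphere_points(n, uneven=False):
    """Sample points on the unit sphere; 'uneven' crowds them toward the equator."""
    z = rng.uniform(-1.0, 1.0, n)
    if uneven:
        z = z ** 3  # cubing pushes z toward 0, so density piles up near the equator
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    r = np.sqrt(1.0 - z ** 2)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

for label, uneven in [("even sampling  ", False), ("uneven sampling", True)]:
    pts = sphere_points(n_vertices, uneven)
    counts = np.array([len(nb) for nb in cKDTree(pts).query_ball_point(pts, radius)])
    print(f"{label}: mean neighborhood size {counts.mean():6.1f}, "
          f"coefficient of variation {counts.std() / counts.mean():.2f}")
```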

Key advantages of the onavg template include:
  • More accurate mapping of brain activity, especially in previously underrepresented areas.
  • Increased efficiency, requiring less data for reliable results, which is beneficial for costly or rare data collection.
  • Reduced computational time, facilitating quicker data analysis in large-scale studies.
  • Improved replicability and reproducibility of research findings.

Despite its advancements, onavg has limitations. It is still an approximation and may not fully capture individual brain variations. It was mainly tested in specific neuroimaging contexts, and further validation is needed across diverse tasks and populations. The template's development relied on data from healthy individuals, suggesting future research should include more diverse populations.

The onavg template is freely available to the scientific community, and its developers are optimistic about its broad impact in neuroscience, particularly in studies of vision, hearing, language, and neurological disorders.

Monday, September 16, 2024

Utah Supreme Court Rules That Alleged Sexual Assault by a Doctor Is Not “Health Care”

Jessica Miller
The Salt Lake Tribune
ProPublica
Originally posted 9 August 24

Sexual assault is not health care, and it isn’t covered by Utah’s medical malpractice law, the state’s Supreme Court ruled on Thursday. The decision revives a lawsuit filed by 94 women who allege their OB-GYN sexually abused them during exams or while he delivered their babies.

In 2022, the group of women sued Dr. David Broadbent and two hospitals where he had worked, wanting to seek civil damages. But a judge dismissed their case because he decided they had filed it incorrectly as a civil sexual assault claim rather than a medical malpractice case. The women had all been seeking health care, Judge Robert Lunnen wrote, and Broadbent was providing that when the alleged assaults happened.

The Salt Lake Tribune and ProPublica covered the decision, speaking with women about the lower court ruling that made it harder for them to sue the doctor for his alleged actions. After that story ran, the state Legislature voted to reform medical malpractice law to exclude sexual assault. But the new law didn’t apply retroactively; the women still had no way to sue.

So they took their case to the Utah Supreme Court, where their attorneys argued that the lower court judge had made an error in his decision. The high court agreed. Broadbent’s alleged conduct, it found, was not a part of the women’s health care — and therefore, not covered by Utah’s medical malpractice laws.


Here are some thoughts:

The Utah Supreme Court has ruled that sexual assault does not constitute health care, thereby reviving a lawsuit filed by 94 women against Dr. David Broadbent, an OB-GYN accused of sexually abusing them during medical exams and childbirth. The lawsuit, initially dismissed by a lower court judge who categorized it as a medical malpractice case, was brought back to life by the Supreme Court's decision. The women had sought civil damages against Broadbent and two hospitals where he practiced, but the judge had previously ruled that the alleged assaults occurred during health care provision, thus falling under medical malpractice laws. The Supreme Court, however, found that Broadbent's actions were not part of legitimate medical treatment, allowing the case to proceed outside the constraints of malpractice regulations.

The ruling marks a significant victory for the plaintiffs, who faced limitations on their ability to seek justice due to the initial classification of their claims. The decision follows legislative changes that excluded sexual assault from being considered medical malpractice, though these changes did not apply retroactively to the Broadbent case. The lawsuit alleges inappropriate and harmful conduct by Broadbent, including touching patients without explanation and using his position to commit sexual assaults. Broadbent, who has denied the allegations, has agreed to cease practicing medicine while under investigation and faces criminal charges of forcible sexual abuse. The case will now return to the lower court for further proceedings, offering the plaintiffs a renewed opportunity to seek justice.

Sunday, September 15, 2024

White House to require insurers pay for mental health the same as physical health

Nathaniel Weixel
TheHill.com
Originally posted 9 Sept 24

Health insurers will be required to cover mental health care and addiction services the same as any other condition under a highly anticipated final rule being released Monday by the Biden administration. 

The move is part of the administration’s ongoing battle with health insurers, who officials say are skirting a 2008 law requiring plans that cover mental health and substance use care benefits do so at the same level as physical health care benefits.

The health insurance industry is likely to challenge the rule, saying the administration did not have the authority to issue it to begin with.  

It argued the proposed requirements were unworkable and an unfunded government mandate that would cause employers to stop covering behavioral health services. 

Essentially, any financial requirements and treatment limitations like copays, coinsurance and visit limits imposed on mental health and substance use disorder benefits can’t be more restrictive than the ones that apply to all medical and surgical benefits. 

“Mental health and substance use disorder benefits should not come with any special roadblocks,” Lisa Gomez, the assistant secretary for employee benefits security at the Department of Labor, told reporters at a press conference. 


Here are some thoughts:

The Biden administration has released a final rule requiring health insurers to cover mental health care and addiction services at the same level as physical health care benefits. This move aims to enforce the 2008 Mental Health Parity and Addiction Equity Act, which has been skirted by insurers. The rule mandates equal financial requirements and treatment limitations for mental health and substance use disorder benefits as for medical and surgical benefits. Health plans must conduct comparative analyses to ensure adequate access to care and comply with the rule to remain competitive. The administration believes this will expand mental health coverage, but the health insurance industry and some Republicans argue it's an overreach that will increase premiums and burden health plans. The rule is part of the administration's effort to address the worsening mental health and substance use crisis, with most people with disorders not receiving treatment due to prohibitive costs and limited access.

Saturday, September 14, 2024

Should psychotherapists conduct visual assessments of nonsuicidal self-injury wounds?

Westers, N. J. (2024).
Psychotherapy, 61(3), 250–258.

Abstract

Beneficence and nonmaleficence are key ethical principles toward which psychotherapists consistently strive. When patients engage in nonsuicidal self-injury (NSSI) during the course of psychotherapy, therapists may feel responsible for visually assessing the severity of the NSSI wound in order to benefit their patients and keep them from harm. However, there are no guidelines for conducting these visual assessments, and there is no research exploring their effects on patients. This article considers the ethical implications of visually examining NSSI wounds; discusses psychotherapist scope of practice and competence; draws attention to relevant ethical standards; underscores risk management, liability, and standard of care; and addresses the risk of suicide or accidental death resulting from NSSI. It also provides ethical guidance for conducting effective verbal assessments of NSSI wounds and offers suggestions for navigating complex clinical situations, such as when patients routinely and spontaneously show their therapists their wounds and how psychotherapists should handle assessments and interventions related to NSSI scars. It ends with implications for training and therapeutic practice.

Impact Statement

Question: How should psychotherapists navigate assessment of nonsuicidal self-injury (NSSI) wounds, and how does this inform their work with auxiliary treatment team members such as medical professionals and parents?

Findings: This article discusses the scope of practice of psychology, individual psychotherapist competence, and risk management to critically evaluate if and how therapists should assess NSSI wounds. 

Meaning: There may be times when briefly looking at NSSI wounds is appropriate in psychotherapy, but visually assessing NSSI wounds is not within the scope of practice of psychology and may not protect patients from harm.

Next Steps: Research should examine with patients if and how their psychotherapist conducted visual assessments of their NSSI wounds, by whom these assessments were initiated, how they affected the patient experience, and if they resulted in help or harm.

The article is paywalled.

Here are some thoughts:

Current research lacks data on the impact of visually or verbally assessing NSSI wounds on patients. This article argues that visual assessment of NSSI wounds is outside the scope of practice for psychologists and can be potentially harmful. Therefore, psychologists need to be aware of interpersonal boundaries, clinical literature, and ethical standards. Instead, verbal assessment is recommended as best practice. Effective verbal assessment techniques include open-ended questions about wound care, pain, and medical attention, while maintaining a respectful and curious demeanor. Therapists should prioritize patient safety and refer patients to medical professionals when necessary. Ultimately, a balance between patient care and ethical boundaries should guide clinical practice.

Friday, September 13, 2024

Artists Score Major Win in Copyright Case Against AI Art Generators

Winston Cho
Hollywood Reporter
Originally posted 13 August 24

Artists suing generative artificial intelligence art generators have cleared a major hurdle in a first-of-its-kind lawsuit over the uncompensated and unauthorized use of billions of images downloaded from the internet to train AI systems, with a federal judge allowing key claims to move forward.

U.S. District Judge William Orrick on Monday advanced all copyright infringement and trademark claims in a pivotal win for artists. He found that Stable Diffusion, Stability’s AI tool that can create hyperrealistic images in response to a prompt of just a few words, may have been “built to a significant extent on copyrighted works” and created with the intent to “facilitate” infringement. The order could entangle in the litigation any AI company that incorporated the model into its products.


Here are some thoughts:

A federal judge has allowed key claims to move forward in a lawsuit filed by artists against generative AI art generators, including Stability AI and Midjourney. The lawsuit alleges that these companies used billions of images downloaded from the internet to train their AI systems without permission or compensation.

U.S. District Judge William Orrick found that Stability's AI tool, Stable Diffusion, may have been built using copyrighted works and created with the intent to facilitate infringement. The judge advanced all copyright infringement and trademark claims, paving the way for discovery.

During discovery, artists' lawyers will seek information on how Stability and Runway built Stable Diffusion and the LAION data set. The case's outcome could entangle any AI company that incorporated the model into its products and may impact the widespread adoption of AI in the movie-making process.

Concept artists like Karla Ortiz, who brought the lawsuit, fear displacement due to AI tools. The case raises novel legal issues, including whether AI-generated works are eligible for copyright protection. The court's ruling could have significant implications for the future of AI in the creative industry.

Defendants argued that the lawsuit must identify specific works used for training, but the court disagreed. The case will proceed to discovery, with potential consequences for AI companies that used the Stable Diffusion model.

Thursday, September 12, 2024

Generative AI Has a 'Shoplifting' Problem

Kate Knibbs
Wired.com
Originally posted 8 AUG 24

Bill Gross made his name in the tech world in the 1990s, when he came up with a novel way for search engines to make money on advertising. Under his pricing scheme, advertisers would pay when people clicked on their ads. Now, the “pay-per-click” guy has founded a startup called ProRata, which has an audacious, possibly pie-in-the-sky business model: “AI pay-per-use.”

Gross, who is CEO of the Pasadena, California, company, doesn’t mince words about the generative AI industry. “It’s stealing,” he says. “They’re shoplifting and laundering the world’s knowledge to their benefit.”

AI companies often argue that they need vast troves of data to create cutting-edge generative tools and that scraping data from the internet, whether it’s text from websites, video or captions from YouTube, or books pilfered from pirate libraries, is legally allowed. Gross doesn’t buy that argument. “I think it’s bullshit,” he says.


Here are some thoughts:

Bill Gross, founder of ProRata, is pitching a novel "AI pay-per-use" business model for the generative AI industry. Gross accuses the industry of stealing and laundering the world's knowledge without compensation. ProRata aims to address this by arranging revenue-sharing deals between AI companies and content creators, so that rights holders are paid when their work is used.

ProRata's approach uses algorithms to break an AI output down into components, identify the sources behind each component, and attribute percentages to the relevant copyright holders for payment. The company has already secured partnerships with prominent companies like Universal Music Group, Financial Times, and The Atlantic. Additionally, ProRata is launching a subscription chatbot-style search engine in October that will use exclusively licensed data, setting a new standard for the industry.
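To make the mechanics concrete, here is a minimal Python sketch of the payout side of an "AI pay-per-use" scheme. The hard part, attributing an AI output to its sources and assigning percentages, is ProRata's own technology and is not described in the article, so the attribution weights, publisher names, and revenue figure below are purely hypothetical assumptions; the sketch only shows how a revenue pool could be split pro rata once such weights exist.

def split_revenue(revenue: float, attributions: dict[str, float]) -> dict[str, float]:
    """Split revenue across rights holders in proportion to attribution weights."""
    total = sum(attributions.values())
    if total <= 0:
        raise ValueError("attribution weights must sum to a positive value")
    return {holder: revenue * weight / total for holder, weight in attributions.items()}

# Hypothetical example: the revenue credited to a single answer, with made-up
# attribution percentages for three rights holders.
payouts = split_revenue(
    revenue=0.02,
    attributions={"Publisher A": 0.55, "Publisher B": 0.30, "Publisher C": 0.15},
)
print(payouts)  # roughly {'Publisher A': 0.011, 'Publisher B': 0.006, 'Publisher C': 0.003}

Presumably the same proportional logic would be accumulated across many queries and paid out periodically, but the split itself is this simple once attribution is settled.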

The company's model offers a solution to the ongoing copyright lawsuits against AI companies, providing a fair and transparent way to compensate content creators. ProRata's emergence is part of a larger trend, with other startups and nonprofits, like TollBit and Dataset Providers Alliance, also entering the training-data licensing space. Gross plans to license ProRata's attribution and payment technologies to other companies, including major AI players, with the goal of making the system affordable and widely adopted, similar to a Visa or Mastercard fee.

Overall, ProRata's approach targets the unresolved question of fair compensation in the generative AI industry. Whether it actually changes industry practice will depend on how well its attribution technology performs at scale and on whether major AI players agree to license it.

Wednesday, September 11, 2024

Second Circuit finds post-9/11 congressional ‘torture’ report not subject to FOIA

Nika Schoonover
Courthouse News
Originally posted 5 Aug 24

A report produced by Congress on the CIA’s post-9/11 detention and interrogation program is not covered by the federal freedom of information law, a Second Circuit panel found Monday.

In the aftermath of the terrorist attacks of September 11, 2001, the Senate Select Committee on Intelligence generated a report on the Detention and Interrogation Program conducted by the CIA. The committee then transmitted the report to various agencies covered under the federal Freedom of Information Act.

In late 2014, the committee publicly released only an executive summary of its findings, which revealed that the CIA's interrogation tactics were more gruesome and less effective than previously acknowledged. The heavily redacted summary showed that interrogations included waterboarding, sleep deprivation, and sexual humiliation such as rectal feeding.

In the Second Circuit panel’s Monday ruling, the court cited another Second Circuit decision from 2022, Behar v. U.S. Department of Homeland Security, where the court determined that an entity not covered by FOIA, such as Congress, would have to show that it manifested a clear control of the documents, and that the receiving agency is not free to “use and dispose of the documents as it sees fit.”


Here are some thoughts:

A recent decision by the Second Circuit panel has found that a report produced by Congress on the CIA's post-9/11 detention and interrogation program is not covered under the federal Freedom of Information Act (FOIA). The report, which details the CIA's use of enhanced interrogation techniques such as waterboarding and sleep deprivation, was generated by the Senate Select Committee on Intelligence in the aftermath of the 9/11 attacks.

The court's ruling centered on the issue of control and ownership of the report, citing a previous decision in Behar v. U.S. Department of Homeland Security. The panel found that Congress had manifested a clear intent to control the report at the time of its creation, and that subsequent actions did not vitiate this intent.

The decision affirms a lower court's dismissal of a complaint filed by Douglas Cox, a law professor who had submitted FOIA requests to various federal agencies for access to the report. Cox argued that the report should be subject to FOIA disclosure, but the court found that he had failed to address a relevant precedent in his oral arguments.

Legal experts have noted that the report's exclusion from FOIA turns on Congress's intent to control the document, underscoring how opaque congressional records can remain. The decision highlights the limits of FOIA as a tool for accessing sensitive information, particularly when congressional records are involved.

Tuesday, September 10, 2024

Are people too flawed, ignorant, and tribal for open societies?

Dan Williams
Conspicuous Cognition
Originally posted 13 July 24

This week and the next, I am on the faculty for a two-week summer school in Budapest on “The Human Mind and the Open Society”. Organised by Thom Scott-Phillips and Christophe Heintz, the summer school focuses “on how understanding the human mind as a tool for navigating a richly social existence can inform our understanding and advocacy of open society, and the ideals it represents”:
“The notion of open society is an attempt to answer the question of how we can effectively live together in large and modern environments. Its ideals include commitments to the rule of law, freedom of association, democratic institutions, and the free use of reason and critical analysis. Arguments in favour of these ideals necessarily depend on assumptions—sometimes hidden and unexamined—about the human mind.”

I agreed to take part in the summer school because it would allow me to interact with a group of fantastic researchers and because it brings together two of my favourite things: (1) evolutionary social science and (2) the ideals of open, liberal societies—ideals that I regard as some of humanity’s most important and most fragile achievements.

In my role, I am giving two lectures on “The epistemic challenges of open societies”. The first lecture explores four factors that distort the capacity of citizens within open societies to acquire accurate beliefs about the world: complexity, invisibility, ignorance, and tribalism.

The info is here.

Here are some thoughts:

The blog post discusses the concept of open societies, emphasizing two key ideals: democracy and the free exchange of ideas. Open societies are characterized by political equality, typically expressed through the principle of "one person, one vote," and they promote radical freedom of thought and expression, as advocated by J.S. Mill. These features are believed to enhance the social production of knowledge and understanding, although that optimism is challenged by the complexities of modern societies.

Complexity and Public Opinion

Modern societies face intricate problems, such as climate change and economic policy, that ordinary citizens are expected to weigh in on. Whether citizens are equipped to do so is doubtful; Walter Lippmann's critique of democracy pointed out that even experts struggle to grasp such complexities. The post also introduces the phenomenon of "rational ignorance": because a single vote has minimal impact on outcomes, individuals have little incentive to become politically informed, and widespread political ignorance follows.

Motivated Cognition and Coalitional Psychology

Even among the minority of highly engaged citizens, the post notes, political involvement tends to come with biases rooted in motivated cognition, in which beliefs are shaped by personal and group interests rather than by a concern for truth. This is linked to coalitional psychology: people act as advocates for their political coalitions, distorting their understanding of reality to align with group interests. The post concludes that while open societies depend on informed electorates, motivated cognition and coalitional allegiances complicate the pursuit of truth and informed decision-making in democratic contexts.

Monday, September 9, 2024

Can astrologers use astrological charts to understand people's character and lives?

Ferretti, A. (2024, July 29).
Clearer Thinking.

Astrology is very popular — both Gallup and YouGov report that about 25% of Americans believe that the position of the stars and planets can affect people's lives, with an additional 20% of people reporting being uncertain about astrology’s legitimacy.

Previously, we tested whether facts about a person's life can be predicted using their astrological sun signs (such as Pisces, Aries, etc.). A number of astrologers criticized this work, saying that of course we found that sun signs don't predict facts about a person's life, because that's baby or tabloid astrology. Real astrologers use people's entire astrological charts to glean insights about them and their lives. 

And they had a good point! Despite sun sign astrology being popular, most astrologers use entire astrological charts, not merely people's sun signs. Here are some examples of the feedback we received:


Inspired by these critiques, we enlisted the help of six astrologers, and with their feedback and guidance, we designed a new test to see whether astrologers can truly gain insights about people from entire astrological charts!

Here are some thoughts:

A recent study put the claims of astrology to the test, examining the ability of 152 astrologers to accurately match individuals with their corresponding natal charts. Despite their confidence in their abilities, the astrologers performed no better than chance, with none correctly matching more than 5 out of 12 charts. The study found no correlation between experience and accuracy, and even the most experienced astrologers failed to perform better than the rest.

The study highlights the importance of scientifically testing claims, particularly those that are ambiguous or unsubstantiated. To test a claim, one must first make it precise, choose a measurable outcome, design a study, and then analyze the results. In this case, the study's findings provide strong evidence against the claim that astrology can accurately match individuals with their natal charts.
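As a back-of-the-envelope illustration of that process applied to this study, the short Python sketch below assumes (as a simplification of the real design) that a chance-level astrologer makes 12 independent 1-in-12 guesses, and asks how often anyone in a group of 152 such guessers would score better than 5 of 12 by luck alone.

import math

N_CHARTS, P_CHANCE, N_ASTROLOGERS = 12, 1 / 12, 152

def prob_at_least(k: int, n: int = N_CHARTS, p: float = P_CHANCE) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more correct matches."""
    return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

p_single = prob_at_least(6)                  # one guesser beating 5 of 12
p_any = 1 - (1 - p_single) ** N_ASTROLOGERS  # at least one of 152 guessers doing so
print(f"P(a single chance guesser scores >= 6/12): {p_single:.5f}")
print(f"P(at least one of {N_ASTROLOGERS} chance guessers does): {p_any:.4f}")

Under these simplified assumptions, even a field of 152 pure guessers would only rarely produce a score above 5 of 12, so the observed ceiling is consistent with chance-level performance.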

The results also reveal a striking disconnect between the astrologers' confidence in their abilities and their actual performance. This raises questions about the validity of astrology and the need for more rigorous scientific testing of its claims. By applying the scientific method to such claims, we can separate fact from fiction and gain a deeper understanding of the world around us.

Sunday, September 8, 2024

Unpacking the pursuit of happiness: Being concerned about happiness but not aspiring to happiness is linked with negative meta-emotions and worse well-being.

Zerwas, F. K., Ford, B. Q., John, O. P., & Mauss, I. B. (2024).
Emotion. Advance online publication.

Abstract

Previous work suggests that sometimes the more people value happiness, the less happy they are. For whom and why is this the case? To answer these questions, we examined a model of happiness pursuit that disentangles two previously conflated individual differences related to valuing happiness. The first individual difference operates at the strength of the value itself and involves viewing happiness as a very important goal (i.e., aspiring to happiness). The second individual difference occurs later in the process of pursuing happiness and involves judging one’s levels of happiness (i.e., concern about happiness). This model predicts that aspiring to happiness is relatively innocuous. Conversely, being concerned about happiness leads people to judge their happiness, thereby infusing negativity (i.e., negative meta-emotions) into potentially positive events, which, in turn, interferes with well-being. We tested these hypotheses using cross-sectional, daily-diary, and longitudinal methods in student and community samples, collected between 2009 and 2020, which are diverse in gender, ethnicity, age, and geographic location (Ntotal = 1,815). In Studies 1a and 1b, aspiring to happiness and concern about happiness represented distinct individual differences. In Study 2, concern about happiness (but not aspiring to happiness) was associated with lower well-being cross-sectionally and longitudinally. In Study 3, these links between concern about happiness and worse well-being were partially accounted for by experiencing greater negative meta-emotions during daily positive events. These findings suggest that highly valuing happiness is not inherently problematic; however, concern and judgment about one’s happiness can undermine it.

The research is paywalled.

Here are some thoughts:

This research suggests that constantly judging your own happiness can have negative consequences for your well-being. In a series of experiments involving over 1,800 participants, researchers found that individuals who worried about their level of happiness experienced lower life satisfaction, greater negativity, and increased depressive symptoms.

Societal pressures often perpetuate the idea that constant happiness is necessary for well-being. However, the study suggests that allowing yourself to experience emotions without judgment can be a more effective route to happiness. Contrary to some previous findings, it was not the pursuit of happiness itself that had detrimental effects, but rather the act of judging one's own happiness that led to negative outcomes.

The study's results highlight the importance of accepting your emotions, both positive and negative, without trying to measure up to unrealistic expectations. By doing so, individuals can cultivate a more authentic and fulfilling approach to happiness, rather than getting caught up in self-criticism and disappointment.

Saturday, September 7, 2024

Self-Consuming Generative Models GO MAD

Alemohammad, S., et al. (n.d.).
OpenReview.

Abstract:

Seismic advances in generative AI algorithms for imagery, text, and other data types have led to the temptation to use AI-synthesized data to train next-generation models. Repeating this process creates an autophagous ("self-consuming") loop whose properties are poorly understood. We conduct a thorough analytical and empirical analysis using state-of-the-art generative image models of three families of autophagous loops that differ in how fixed or fresh real training data is available through the generations of training and whether the samples from previous-generation models have been biased to trade off data quality versus diversity. Our primary conclusion across all scenarios is that without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease. We term this condition Model Autophagy Disorder (MAD), by analogy to mad cow disease, and show that appreciable MADness arises in just a few generations.

Here are some thoughts:

This study explored the consequences of autophagous loops in generative models, in which each generation of models is trained on synthetic data produced by earlier models. The resulting degradation, which the authors term Model Autophagy Disorder (MAD), erodes model quality and diversity and, if left uncontrolled, could degrade the quality and diversity of data across the internet.

The researchers identified three families of autophagous loops and found that sampling bias plays a crucial role in the development of MAD. Without sufficient fresh real data, future generative models will inevitably suffer from MAD, leading to decreased quality and diversity. This has significant implications for practitioners working with generative models, particularly those using synthetic training data.
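To make the mechanism concrete, here is a deliberately tiny one-dimensional sketch of an autophagous loop, with all parameters assumed for illustration rather than taken from the paper. "Training" is fitting a Gaussian, "generation" is sampling from the fit with a quality bias that discards samples far from the mode, and the fitted standard deviation stands in for diversity; mixing fresh real data into each generation is the mitigation the paper points to.

import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN, REAL_STD = 0.0, 1.0       # the "true" data distribution
N, GENERATIONS, TRUNC = 2_000, 20, 1.5

def run_loop(real_fraction: float) -> float:
    """Return the fitted std (a stand-in for diversity) after GENERATIONS rounds."""
    mu, sigma = REAL_MEAN, REAL_STD  # generation-0 "model" fit on real data
    for _ in range(GENERATIONS):
        # Sample from the current model, biased toward "high-quality" samples
        # near the mode (the quality-vs-diversity trade-off in the abstract).
        raw = rng.normal(mu, sigma, size=4 * N)
        synthetic = raw[np.abs(raw - mu) < TRUNC * sigma][: int(N * (1 - real_fraction))]
        fresh_real = rng.normal(REAL_MEAN, REAL_STD, size=int(N * real_fraction))
        data = np.concatenate([synthetic, fresh_real])
        mu, sigma = float(data.mean()), float(data.std())  # "retrain" on the mixture
    return sigma

for frac in (0.0, 0.1, 0.5):
    print(f"real fraction {frac:.1f} -> fitted std after {GENERATIONS} generations: {run_loop(frac):.3f}")

Under these toy assumptions, the all-synthetic loop collapses toward zero spread within a couple of dozen generations, while the runs that mix in fresh real data stay much closer to the true value, mirroring the paper's central conclusion about keeping enough real data in each generation.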

To mitigate the risks of MAD, practitioners can take steps to control the ratio of real-to-synthetic training data and identify synthetic data through watermarking or other methods. However, watermarking introduces hidden artifacts that can be amplified by autophagy, highlighting the need for autophagy-aware watermarking techniques. Future research should focus on developing these techniques, examining the effects of MADness on downstream tasks, and exploring the implications for other data types, such as language models.

The study's conclusions serve as a warning: as generative models become increasingly ubiquitous, practitioners need to weigh the risks of autophagous loops, keep track of how much genuinely fresh real data enters each round of training, and monitor for signs of MAD. Understanding its causes and consequences makes it possible to prevent the disorder and to keep developing high-quality generative models without eroding the quality and diversity of the data they depend on.