Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Machine Learning.

Sunday, August 24, 2025

Shaping the Future of Healthcare: Ethical Clinical Challenges and Pathways to Trustworthy AI

Goktas, P., & Grzybowski, A. (2025).
Journal of Clinical Medicine, 14(5), 1605.

Abstract

Background/Objectives: Artificial intelligence (AI) is transforming healthcare, enabling advances in diagnostics, treatment optimization, and patient care. Yet, its integration raises ethical, regulatory, and societal challenges. Key concerns include data privacy risks, algorithmic bias, and regulatory gaps that struggle to keep pace with AI advancements. This study aims to synthesize a multidisciplinary framework for trustworthy AI in healthcare, focusing on transparency, accountability, fairness, sustainability, and global collaboration. It moves beyond high-level ethical discussions to provide actionable strategies for implementing trustworthy AI in clinical contexts. 

Methods: A structured literature review was conducted using PubMed, Scopus, and Web of Science. Studies were selected based on relevance to AI ethics, governance, and policy in healthcare, prioritizing peer-reviewed articles, policy analyses, case studies, and ethical guidelines from authoritative sources published within the last decade. The conceptual approach integrates perspectives from clinicians, ethicists, policymakers, and technologists, offering a holistic "ecosystem" view of AI. No clinical trials or patient-level interventions were conducted. 

Results: The analysis identifies key gaps in current AI governance and introduces the Regulatory Genome, an adaptive AI oversight framework aligned with global policy trends and Sustainable Development Goals. It introduces quantifiable trustworthiness metrics, a comparative analysis of AI categories for clinical applications, and bias mitigation strategies. Additionally, it presents interdisciplinary policy recommendations for aligning AI deployment with ethical, regulatory, and environmental sustainability goals. This study emphasizes measurable standards, multi-stakeholder engagement strategies, and global partnerships to ensure that future AI innovations meet ethical and practical healthcare needs.

Conclusions: Trustworthy AI in healthcare requires more than technical advancements; it demands robust ethical safeguards, proactive regulation, and continuous collaboration. By adopting the recommended roadmap, stakeholders can foster responsible innovation, improve patient outcomes, and maintain public trust in AI-driven healthcare.


Here are some thoughts:

This article is important to psychologists as it addresses the growing role of artificial intelligence (AI) in healthcare and emphasizes the ethical, legal, and societal implications that psychologists must consider in their practice. It highlights the need for transparency, accountability, and fairness in AI-based health technologies, which can significantly influence patient behavior, decision-making, and perceptions of care. The article also touches on issues such as patient trust, data privacy, and the potential for AI to reinforce biases, all of which are critical psychological factors that impact treatment outcomes and patient well-being. Additionally, it underscores the importance of integrating human-centered design and ethics into AI development, offering psychologists insights into how they can contribute to shaping AI tools that align with human values, promote equitable healthcare, and support mental health in an increasingly digital world.

Thursday, January 30, 2025

Advancements in AI-driven Healthcare: A Comprehensive Review of Diagnostics, Treatment, and Patient Care Integration

Kasula, B. Y. (2024, January 18).
International Journal of Machine Learning for Sustainable Development, 6(1).

Abstract

This research paper presents a comprehensive review of the recent advancements in AI-driven healthcare, focusing on diagnostics, treatment, and the integration of AI technologies in patient care. The study explores the evolution of artificial intelligence applications in medical imaging, diagnosis accuracy, personalized treatment plans, and the overall enhancement of healthcare delivery. Ethical considerations and challenges associated with AI adoption in healthcare are also discussed. The paper concludes with insights into the potential future developments and the transformative impact of AI on the healthcare landscape.


Here are some thoughts:

This research paper provides a comprehensive review of recent advancements in AI-driven healthcare, focusing on diagnostics, treatment, and the integration of AI technologies in patient care. The study explores the evolution of artificial intelligence applications in medical imaging, diagnosis accuracy, personalized treatment plans, and the overall enhancement of healthcare delivery. It discusses the transformative impact of AI on healthcare, highlighting key achievements, challenges, and ethical considerations associated with its widespread adoption.

The paper examines AI's role in improving diagnostic accuracy, particularly in medical imaging, and its contribution to developing personalized treatment plans. It also addresses the ethical dimensions of AI in healthcare, including patient privacy, data security, and equitable distribution of AI-driven healthcare benefits. The research emphasizes the need for a holistic approach to AI integration in healthcare, calling for collaboration between healthcare professionals, technologists, and policymakers to navigate the evolving landscape successfully.

It is important for psychologists to understand the content of this article for several reasons. Firstly, AI is increasingly being applied in mental health diagnosis and treatment, as mentioned in the paper's references. Psychologists need to be aware of these advancements to stay current in their field and potentially incorporate AI-driven tools into their practice. Secondly, the ethical considerations discussed in the paper, such as patient privacy and data security, are equally relevant to psychological practice. Understanding these issues can help psychologists navigate the ethical challenges that may arise with the integration of AI in mental health care.

Moreover, the paper's emphasis on personalized medicine and treatment plans is particularly relevant to psychology, where individualized approaches are often crucial. By understanding AI's potential in this area, psychologists can explore ways to enhance their treatment strategies and improve patient outcomes. Lastly, as healthcare becomes increasingly interdisciplinary, psychologists need to be aware of technological advancements in other medical fields to collaborate effectively with other healthcare professionals and provide comprehensive care to their patients.

Thursday, November 14, 2024

AI threatens to cement racial bias in clinical algorithms. Could it also chart a path forward?

Katie Palmer
STATNews.com
Originally posted 11 Sept 24

Here is an excerpt:

In the past four years, clinical medicine has been forced to reckon with the role of race in simpler iterations of these algorithms. Common calculators, used by doctors to inform care decisions, sometimes adjust their predictions depending on a patient’s race — perpetuating the false idea that race is a biological construct, not a social one.

Machine learning techniques could chart a path forward. They could allow clinical researchers to crunch reams of real-world patient records to deliver more nuanced predictions about health risks, obviating the need to rely on race as a crude — and sometimes harmful — proxy. But what happens, Gallifant asked his table of students, if that real-world data is tainted, unreliable? What happens to patients if researchers train their high-powered algorithms on data from biased tools like the pulse oximeter?

Over the weekend, Celi’s team of volunteer clinicians and data scientists explained, they’d go hunting for that embedded bias in a massive open-source clinical dataset, the first step to make sure it doesn’t influence clinical algorithms that impact patient care. The pulse oximeter continued to make the rounds to a student named Ady Suy — who, some day, wants to care for people whose concerns might be ignored, as a nurse or a pediatrician. “I’ve known people that didn’t get the care that they needed,” she said. “And I just really want to change that.”

At Brown and in events like this around the world, Celi and his team have been priming medicine’s next cohort of researchers and clinicians to cross-examine the data they intend to use. As scientists and regulators sound alarm bells about the risks of novel artificial intelligence, Celi believes the most alarming thing about AI isn’t its newness: It’s that it repeats an age-old mistake in medicine, continuing to use flawed, incomplete data to make decisions about patients.

“The data that we use to build AI reflects everything about the systems that we would like to disrupt,” said Celi: “Both the good and the bad.” And without action, AI stands to cement bias into the health care system at disquieting speed and scale.


Here are some thoughts:

In a recent event at Brown University, physician and data scientist Leo Celi led a workshop aimed at educating high school students and medical trainees about the biases present in medical data, particularly concerning the use of pulse oximeters, which often provide inaccurate readings for patients with darker skin tones. Celi emphasized the importance of addressing these biases as machine learning algorithms increasingly influence patient care decisions. The workshop involved hands-on activities where participants analyzed a large clinical dataset to identify embedded biases that could affect algorithmic predictions. Celi and his team highlighted the need for future researchers to critically examine the data they use, as flawed data can perpetuate existing inequities in healthcare. The event underscored the urgent need for diverse perspectives in AI development to ensure algorithms are fair and equitable, as well as the importance of improving data collection methods to better represent marginalized groups.
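
For readers who want to see what such a bias audit looks like in practice, here is a minimal sketch (my illustration with synthetic numbers, not the workshop's code) that compares pulse-oximeter error and the rate of "hidden hypoxemia" across groups.

```python
# Minimal sketch of the kind of bias audit described above (illustration only,
# not the workshop's code): compare pulse-oximeter error and the rate of
# "hidden hypoxemia" (SpO2 looks acceptable while arterial SaO2 is low) by group.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
cohort = pd.DataFrame({
    "race": rng.choice(["Black", "White"], size=n),
    "sao2": rng.normal(92, 4, size=n),            # arterial blood-gas reference
})
# Synthetic oximeter readings that overestimate more in one group (toy effect size).
bias = np.where(cohort["race"] == "Black", 2.0, 0.5)
cohort["spo2"] = cohort["sao2"] + bias + rng.normal(0, 1.5, size=n)

cohort["error"] = cohort["spo2"] - cohort["sao2"]
cohort["hidden_hypoxemia"] = (cohort["spo2"] >= 92) & (cohort["sao2"] < 88)
print(cohort.groupby("race")[["error", "hidden_hypoxemia"]].mean())
```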

Monday, October 7, 2024

Prediction of Future Parkinson Disease Using Plasma Proteins Combined With Clinical-Demographic Measures

You, J., et al. (2024).
Neurology, 103(3).

Abstract

Background and Objectives

Identification of individuals at high risk of developing Parkinson disease (PD) several years before diagnosis is crucial for developing treatments to prevent or delay neurodegeneration. This study aimed to develop predictive models for PD risk that combine plasma proteins and easily accessible clinical-demographic variables.

Results

A total of 52,503 participants without PD (median age 58, 54% female) were included. Over a median follow-up duration of 14.0 years, 751 individuals were diagnosed with PD (median age 65, 37% female). Using a forward selection approach, we selected a panel of 22 plasma proteins for optimal prediction. Using an ensemble tree-based Light Gradient Boosting Machine (LightGBM) algorithm, the model achieved an area under the receiver operating characteristic curve (AUC) of 0.800 (95% CI 0.785–0.815). The LightGBM prediction model integrating both plasma proteins and clinical-demographic variables demonstrated enhanced predictive accuracy, with an AUC of 0.832 (95% CI 0.815–0.849). Key predictors identified included age, years of education, history of traumatic brain injury, and serum creatinine. The incorporation of 11 plasma proteins (neurofilament light, integrin subunit alpha V, hematopoietic PGD synthase, histamine N-methyltransferase, tubulin polymerization promoting protein family member 3, ectodysplasin A2 receptor, latexin, interleukin-13 receptor subunit alpha-1, BAG family molecular chaperone regulator 3, tryptophanyl-tRNA synthetase, and secretogranin-2) augmented the model’s predictive accuracy. External validation in the PPMI cohort confirmed the model’s reliability, producing an AUC of 0.810 (95% CI 0.740–0.873). Notably, alterations in these predictors were detectable several years before the diagnosis of PD.

Discussion

Our findings support the potential utility of a machine learning-based model integrating clinical-demographic variables with plasma proteins to identify individuals at high risk for PD within the general population. Although these predictors have been validated by PPMI, additional validation in a more diverse population reflective of the general community is essential.

The article is cited above, but paywalled.

Here are some thoughts:

A recent study published in Neurology demonstrates the potential for early detection of Parkinson's disease (PD) using machine learning techniques. Researchers developed a predictive model that analyzes blood proteins in conjunction with clinical data, allowing for the identification of individuals at high risk of developing PD up to 15 years before symptoms appear. The study involved over 50,000 participants from the UK Biobank, focusing on 1,463 different blood plasma proteins. By employing machine learning to identify patterns in protein levels alongside clinical information—such as age, history of brain injuries, and blood creatinine levels—the researchers were able to achieve significant accuracy in predicting Parkinson's risk.
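
As a rough illustration of this modeling approach (a sketch with synthetic stand-in data, not the authors' code), the pipeline looks something like this: gradient-boosted trees on protein and clinical-demographic features, evaluated with the area under the ROC curve.

```python
# Sketch of the kind of pipeline the study describes: gradient-boosted trees
# (LightGBM) on plasma proteins plus clinical-demographic features, scored by AUC.
# The data below are synthetic stand-ins; feature names are hypothetical.
import numpy as np
import pandas as pd
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "NfL": rng.normal(size=n),            # stand-in for neurofilament light
    "ITGAV": rng.normal(size=n),          # ... plus the other selected proteins
    "age": rng.integers(40, 70, size=n),
    "education_years": rng.integers(8, 20, size=n),
    "tbi_history": rng.integers(0, 2, size=n),
    "creatinine": rng.normal(70, 15, size=n),
})
risk = 0.08 * (X["age"] - 55) + 1.5 * X["NfL"] + 0.5 * X["tbi_history"]
y = (risk + rng.normal(scale=2, size=n) > 2).astype(int)   # synthetic incident-PD labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2, random_state=0)
model = lgb.LGBMClassifier(n_estimators=300, learning_rate=0.05, class_weight="balanced")
model.fit(X_tr, y_tr)
print(f"held-out AUC: {roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.3f}")
```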

The findings revealed 22 specific proteins that are significantly associated with the risk of developing PD, including neurofilament light (NfL), which is linked to brain cell damage, as well as various proteins involved in inflammation and muscle function. This model not only offers a non-invasive and cost-effective screening method but also presents opportunities for early intervention and improved disease management, potentially enabling the development and assessment of neuroprotective treatments.
However, the study does have limitations that warrant consideration. The participant population was predominantly of European descent, which may limit the generalizability of the findings to more diverse groups. Additionally, the reliance on medical records for PD diagnosis raises concerns about potential misdiagnoses. Future research will need to validate the model in diverse populations and utilize more precise measurement techniques for protein levels. Longitudinal studies that incorporate repeated measurements could further enhance the predictive power of the model.

Overall, this groundbreaking research offers new hope for the early detection and intervention of Parkinson's disease, potentially revolutionizing the approach to managing this neurodegenerative disorder.

Tuesday, July 16, 2024

Robust and interpretable AI-guided marker for early dementia prediction in real-world clinical settings

Lee, L. Y., et al. (2024).
EClinicalMedicine, 102725.

Background

Predicting dementia early has major implications for clinical management and patient outcomes. Yet, we still lack sensitive tools for stratifying patients early, resulting in patients being undiagnosed or wrongly diagnosed. Despite rapid expansion in machine learning models for dementia prediction, limited model interpretability and generalizability impede translation to the clinic.

Methods

We build a robust and interpretable predictive prognostic model (PPM) and validate its clinical utility using real-world, routinely-collected, non-invasive, and low-cost (cognitive tests, structural MRI) patient data. To enhance scalability and generalizability to the clinic, we: 1) train the PPM with clinically-relevant predictors (cognitive tests, grey matter atrophy) that are common across research and clinical cohorts, 2) test PPM predictions with independent multicenter real-world data from memory clinics across countries (UK, Singapore).

Interpretation

Our results provide evidence for a robust and explainable clinical AI-guided marker for early dementia prediction that is validated against longitudinal, multicenter patient data across countries, and has strong potential for adoption in clinical practice.


Here is a summary and some thoughts:

Cambridge scientists have developed an AI tool capable of predicting with high accuracy whether individuals with early signs of dementia will remain stable or develop Alzheimer’s disease. This tool utilizes non-invasive, low-cost patient data such as cognitive tests and MRI scans to make its predictions, showing greater sensitivity than current diagnostic methods. The algorithm was able to correctly identify 82% of individuals who would develop Alzheimer’s and 81% of those who wouldn’t, surpassing standard clinical markers. This advancement could reduce the reliance on invasive and costly diagnostic tests and allow for early interventions, potentially improving treatment outcomes.

The machine learning model stratifies patients into three groups: those whose symptoms remain stable, those who progress slowly to Alzheimer’s, and those who progress rapidly. This stratification could help clinicians tailor treatments and closely monitor high-risk individuals. Validated with real-world data from memory clinics in the UK and Singapore, the tool demonstrates its applicability in clinical settings. The researchers aim to extend this model to other forms of dementia and incorporate additional data types, with the ultimate goal of providing precise diagnostic and treatment pathways, thereby accelerating the discovery of new treatments for dementia.
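
To make the headline numbers concrete, here is a small sketch (illustrative labels only, not the study's data) showing how the 82% and 81% figures correspond to sensitivity and specificity.

```python
# Illustration of how sensitivity (82%) and specificity (81%) are computed for a
# binary "will progress to Alzheimer's" prediction. The labels here are made up.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1])   # 1 = progressed to Alzheimer's
y_pred = np.array([1, 1, 0, 0, 0, 0, 1, 1, 0, 1])   # model's predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # proportion of true progressors correctly flagged
specificity = tn / (tn + fp)   # proportion of stable patients correctly cleared
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```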

Tuesday, April 2, 2024

The Puzzle of Evaluating Moral Cognition in Artificial Agents

Reinecke, M. G., Mao, Y., et al. (2023).
Cognitive Science, 47(8).

Abstract

In developing artificial intelligence (AI), researchers often benchmark against human performance as a measure of progress. Is this kind of comparison possible for moral cognition? Given that human moral judgment often hinges on intangible properties like “intention” which may have no natural analog in artificial agents, it may prove difficult to design a “like-for-like” comparison between the moral behavior of artificial and human agents. What would a measure of moral behavior for both humans and AI look like? We unravel the complexity of this question by discussing examples within reinforcement learning and generative AI, and we examine how the puzzle of evaluating artificial agents' moral cognition remains open for further investigation within cognitive science.

The link to the article is the hyperlink above.

Here is my summary:

This article delves into the challenges associated with assessing the moral decision-making capabilities of artificial intelligence systems. It explores the complexities of imbuing AI with ethical reasoning and the difficulties in evaluating their moral cognition. The article discusses the need for robust frameworks and methodologies to effectively gauge the ethical behavior of AI, highlighting the intricate nature of integrating morality into machine learning algorithms. Overall, it emphasizes the critical importance of developing reliable methods to evaluate the moral reasoning of artificial agents in order to ensure their responsible and ethical deployment in various domains.

Friday, February 10, 2023

Individual differences in (dis)honesty are represented in the brain's functional connectivity at rest

Speer, S. P., Smidts, A., & Boksem, M. A. (2022).
NeuroImage, 246, 118761.
https://doi.org/10.1016/j.neuroimage.2021.118761

Abstract

Measurement of the determinants of socially undesirable behaviors, such as dishonesty, are complicated and obscured by social desirability biases. To circumvent these biases, we used connectome-based predictive modeling (CPM) on resting state functional connectivity patterns in combination with a novel task which inconspicuously measures voluntary cheating to gain access to the neurocognitive determinants of (dis)honesty. Specifically, we investigated whether task-independent neural patterns within the brain at rest could be used to predict a propensity for (dis)honest behavior. Our analyses revealed that functional connectivity, especially between brain networks linked to self-referential thinking (vmPFC, temporal poles, and PCC) and reward processing (caudate nucleus), reliably correlates, in an independent sample, with participants’ propensity to cheat. Participants who cheated the most also scored highest on several self-report measures of impulsivity which underscores the generalizability of our results. Notably, when comparing neural and self-report measures, the neural measures were found to be more important in predicting cheating propensity.

Significance statement

Dishonesty pervades all aspects of life and causes enormous economic losses. However, because the underlying mechanisms of socially undesirable behaviors are difficult to measure, the neurocognitive determinants of individual differences in dishonesty largely remain unknown. Here, we apply machine-learning methods to stable patterns of neural connectivity to investigate how dispositions toward (dis)honesty, measured by an innovative behavioral task, are encoded in the brain. We found that stronger connectivity between brain regions associated with self-referential thinking and reward are predictive of the propensity to be honest. The high predictive accuracy of our machine-learning models, combined with the reliable nature of resting-state functional connectivity, which is uncontaminated by the social-desirability biases to which self-report measures are susceptible, provides an excellent avenue for the development of useful neuroimaging-based biomarkers of socially undesirable behaviors.

Discussion

Employing connectome-based predictive modeling (CPM) in combination with the innovative Spot-The-Differences task, which allows for inconspicuously measuring cheating, we identified a functional connectome that reliably predicts a disposition toward (dis)honesty in an independent sample. We observed a Pearson correlation between out-of-sample predicted and actual cheatcount (r = 0.40) that resides on the higher side of the typical range of correlations (between r = 0.2 and r = 0.5) reported in previous studies employing CPM (Shen et al., 2017). Thus, functional connectivity within the brain at rest predicts whether someone is more honest or more inclined to cheat in our task.

In light of previous research on moral decisions, the regions we identified in our resting state analysis can be associated with two networks frequently found to be involved in moral decision making. First, the vmPFC, the bilateral temporal poles and the PCC have consistently been associated with self-referential thinking. For example, it has been found that functional connectivity between these areas during rest is associated with higher-level metacognitive operations such as self-reflection, introspection and self-awareness (Gusnard et al., 2001; Meffert et al., 2013; Northoff et al., 2006; Vanhaudenhuyse et al., 2011). Secondly, the caudate nucleus, which has been found to be involved in anticipation and valuation of rewards (Ballard and Knutson, 2009; Knutson et al., 2001) can be considered an important node in the reward network (Bartra et al., 2013). Participants with higher levels of activation in the reward network, in anticipation of rewards, have previously been found to indeed be more dishonest (Abe and Greene, 2014).
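
For readers unfamiliar with connectome-based predictive modeling, the toy sketch below (an illustration on simulated data, not the authors' pipeline) captures the general logic: select connectivity edges that correlate with the behavior in the training folds, fit a linear model on those edges, and correlate held-out predictions with the observed cheating scores.

```python
# Toy sketch of connectome-based predictive modeling (CPM), not the authors' code:
# select connectivity edges correlated with behavior in the training folds, fit a
# linear model on those edges, and correlate held-out predictions with the
# observed behavior (here, a simulated "cheatcount").
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_subj, n_edges = 100, 500
edges = rng.normal(size=(n_subj, n_edges))            # resting-state edge strengths
cheatcount = edges[:, :10].sum(axis=1) + rng.normal(scale=2.0, size=n_subj)

preds = np.zeros(n_subj)
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(edges):
    # Edge selection: keep edges whose train-set correlation with behavior is strong.
    r = np.array([pearsonr(edges[train, j], cheatcount[train])[0] for j in range(n_edges)])
    selected = np.abs(r) > 0.2
    model = LinearRegression().fit(edges[train][:, selected], cheatcount[train])
    preds[test] = model.predict(edges[test][:, selected])

r_out, _ = pearsonr(preds, cheatcount)
print(f"out-of-sample r = {r_out:.2f}")               # the paper reports r = 0.40
```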

Thursday, April 7, 2022

How to Prevent Robotic Sociopaths: A Neuroscience Approach to Artificial Ethics

Christov-Moore, L., Reggente, N., et al.
https://doi.org/10.31234/osf.io/6tn42

Abstract

Artificial intelligence (AI) is expanding into every niche of human life, organizing our activity, expanding our agency and interacting with us to an increasing extent. At the same time, AI’s efficiency, complexity and refinement are growing quickly. Justifiably, there is increasing concern with the immediate problem of engineering AI that is aligned with human interests.

Computational approaches to the alignment problem attempt to design AI systems to parameterize human values like harm and flourishing, and avoid overly drastic solutions, even if these are seemingly optimal. In parallel, ongoing work in service AI (caregiving, consumer care, etc.) is concerned with developing artificial empathy, teaching AIs to decode human feelings and behavior, and evince appropriate, empathetic responses. This could be equated to cognitive empathy in humans.

We propose that in the absence of affective empathy (which allows us to share in the states of others), existing approaches to artificial empathy may fail to produce the caring, prosocial component of empathy, potentially resulting in superintelligent, sociopath-like AI. We adopt the colloquial usage of “sociopath” to signify an intelligence possessing cognitive empathy (i.e., the ability to infer and model the internal states of others), but crucially lacking harm aversion and empathic concern arising from vulnerability, embodiment, and affective empathy (which permits for shared experience). An expanding, ubiquitous intelligence that does not have a means to care about us poses a species-level risk.

It is widely acknowledged that harm aversion is a foundation of moral behavior. However, harm aversion is itself predicated on the experience of harm, within the context of the preservation of physical integrity. Following from this, we argue that a “top-down” rule-based approach to achieving caring, aligned AI may be unable to anticipate and adapt to the inevitable novel moral/logistical dilemmas faced by an expanding AI. It may be more effective to cultivate prosociality from the bottom up, baked into an embodied, vulnerable artificial intelligence with an incentive to preserve its real or simulated physical integrity. This may be achieved via optimization for incentives and contingencies inspired by the development of empathic concern in vivo. We outline the broad prerequisites of this approach and review ongoing work that is consistent with our rationale.

If successful, work of this kind could allow for AI that surpasses empathic fatigue and the idiosyncrasies, biases, and computational limits of human empathy. The scalable complexity of AI may allow it unprecedented capability to deal proportionately and compassionately with complex, large-scale ethical dilemmas. By addressing this problem seriously in the early stages of AI’s integration with society, we might eventually produce an AI that plans and behaves with an ingrained regard for the welfare of others, aided by the scalable cognitive complexity necessary to model and solve extraordinary problems.

Tuesday, March 9, 2021

How social learning amplifies moral outrage expression in online social networks

Brady, W. J., McLoughlin, K. L., et al.
(2021, January 19).
https://doi.org/10.31234/osf.io/gf7t5

Abstract

Moral outrage shapes fundamental aspects of human social life and is now widespread in online social networks. Here, we show how social learning processes amplify online moral outrage expressions over time. In two pre-registered observational studies of Twitter (7,331 users and 12.7 million total tweets) and two pre-registered behavioral experiments (N = 240), we find that positive social feedback for outrage expressions increases the likelihood of future outrage expressions, consistent with principles of reinforcement learning. We also find that outrage expressions are sensitive to expressive norms in users’ social networks, over and above users’ own preferences, suggesting that norm learning processes guide online outrage expressions. Moreover, expressive norms moderate social reinforcement of outrage: in ideologically extreme networks, where outrage expression is more common, users are less sensitive to social feedback when deciding whether to express outrage. Our findings highlight how platform design interacts with human learning mechanisms to impact moral discourse in digital public spaces.

From the Conclusion

At first blush, documenting the role of reinforcement learning in online outrage expressions may seem trivial. Of course, we should expect that a fundamental principle of human behavior, extensively observed in offline settings, will similarly describe behavior in online settings. However, reinforcement learning of moral behaviors online, combined with the design of social media platforms, may have especially important social implications. Social media newsfeed algorithms can directly impact how much social feedback a given post receives by determining how many other users are exposed to that post. Because we show here that social feedback impacts users’ outrage expressions over time, this suggests newsfeed algorithms can influence users’ moral behaviors by exploiting their natural tendencies for reinforcement learning. In this way, reinforcement learning on social media differs from reinforcement learning in other environments because crucial inputs to the learning process are shaped by corporate interests. Even if platform designers do not intend to amplify moral outrage, design choices aimed at satisfying other goals, such as profit maximization via user engagement, can indirectly impact moral behavior because outrage-provoking content draws high engagement. Given that moral outrage plays a critical role in collective action and social change, our data suggest that platform designers have the ability to influence the success or failure of social and political movements, as well as informational campaigns designed to influence users’ moral and political attitudes. Future research is required to understand whether users are aware of this, and whether making such knowledge salient can impact their online behavior.


People are more likely to express moral outrage online if they have been rewarded for it in the past or if it is common in their own social network. They are even willing to express far more moral outrage than they genuinely feel in order to fit in.
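
The reinforcement-learning account is easy to see in a toy simulation (my illustration, not the authors' model): if outrage posts that receive positive feedback slightly raise the probability of posting outrage again, expression rates drift upward over time.

```python
# Toy simulation of the social-reinforcement account (illustrative only, not the
# authors' model): positive feedback for an outrage post nudges up the probability
# of expressing outrage in the future.
import random

random.seed(1)
p_outrage = 0.2          # initial probability of posting an outrage expression
p_feedback = 0.6         # chance an outrage post gets positive social feedback
learning_rate = 0.05

trajectory = []
for t in range(200):
    trajectory.append(p_outrage)
    if random.random() < p_outrage:                  # user posts outrage
        if random.random() < p_feedback:             # post is rewarded (likes/shares)
            p_outrage += learning_rate * (1 - p_outrage)
        else:                                         # ignored posts weaken the habit slightly
            p_outrage -= learning_rate * p_outrage * 0.5

print(f"start={trajectory[0]:.2f}, end={trajectory[-1]:.2f}")
```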

Tuesday, December 1, 2020

Using Machine Learning to Generate Novel Hypotheses: Increasing Optimism About COVID-19 Makes People Less Willing to Justify Unethical Behaviors

Sheetal, A., Feng, Z., & Savani, K. (2020).
Psychological Science, 31(10), 1222-1235.
doi:10.1177/0956797620959594

Abstract

How can we nudge people to not engage in unethical behaviors, such as hoarding and violating social-distancing guidelines, during the COVID-19 pandemic? Because past research on antecedents of unethical behavior has not provided a clear answer, we turned to machine learning to generate novel hypotheses. We trained a deep-learning model to predict whether or not World Values Survey respondents perceived unethical behaviors as justifiable, on the basis of their responses to 708 other items. The model identified optimism about the future of humanity as one of the top predictors of unethicality. A preregistered correlational study (N = 218 U.S. residents) conceptually replicated this finding. A preregistered experiment (N = 294 U.S. residents) provided causal support: Participants who read a scenario conveying optimism about the COVID-19 pandemic were less willing to justify hoarding and violating social-distancing guidelines than participants who read a scenario conveying pessimism. The findings suggest that optimism can help reduce unethicality, and they document the utility of machine-learning methods for generating novel hypotheses.

Here is how the research article begins:

Unethical behaviors can have substantial consequences in times of crisis. For example, in the midst of the COVID-19 pandemic, many people hoarded face masks and hand sanitizers; this hoarding deprived those who needed protective supplies most (e.g., medical workers and the elderly) and, therefore, put them at risk. Despite escalating deaths, more than 50,000 people were caught violating quarantine orders in Italy, putting themselves and others at risk. Governments covered up the scale of the pandemic in that country, thereby allowing the infection to spread in an uncontrolled manner. Thus, understanding antecedents of unethical behavior and identifying nudges to reduce unethical behaviors are particularly important in times of crisis.

Here is part of the Discussion

We formulated a novel hypothesis—that optimism reduces unethicality—on the basis of the deep-learning model’s finding that whether people think that the future of humanity is bleak or bright is a strong predictor of unethicality. This variable was not flagged as a top predictor either by the correlational analysis or by the lasso regression. Consistent with this idea, the results of a correlational study showed that people higher on dispositional optimism were less willing to engage in unethical behaviors. A following experiment found that increasing participants’ optimism about the COVID-19 epidemic reduced the extent to which they justified unethical behaviors related to the epidemic. The behavioral studies were conducted with U.S. American participants; thus, the cultural generalizability of the present findings is unclear. Future research needs to test whether optimism reduces unethical behavior in other cultural contexts.
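
The hypothesis-generation step can be approximated with standard tools. The sketch below (synthetic stand-in data, not the authors' deep-learning pipeline) trains a model on many survey items and ranks them by permutation importance to surface candidate predictors worth testing experimentally.

```python
# Sketch of ML-driven hypothesis generation (synthetic stand-in data, not the
# authors' deep-learning pipeline): train a model on many survey items to predict
# whether a respondent rates unethical behaviors as justifiable, then rank items
# by permutation importance to surface unexpected predictors.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_resp, n_items = 2000, 100                       # the real survey had ~708 items
X = pd.DataFrame(rng.integers(1, 6, size=(n_resp, n_items)),
                 columns=[f"item_{i}" for i in range(n_items)])
# Hidden ground truth for the toy data: "item_7" (say, an optimism item) drives the outcome.
y = (X["item_7"] + rng.normal(scale=1.5, size=n_resp) < 3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranking = pd.Series(imp.importances_mean, index=X.columns).sort_values(ascending=False)
print(ranking.head(5))                            # top-ranked items become candidate hypotheses
```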

Tuesday, October 13, 2020

Machine learning uncovers the most robust self-report predictors of relationship quality across 43 longitudinal couples studies

Joel, S., et al.
Proceedings of the National Academy of Sciences 
Aug 2020, 117 (32) 19061-19071
DOI: 10.1073/pnas.1917036117

Abstract

Given the powerful implications of relationship quality for health and well-being, a central mission of relationship science is explaining why some romantic relationships thrive more than others. This large-scale project used machine learning (i.e., Random Forests) to 1) quantify the extent to which relationship quality is predictable and 2) identify which constructs reliably predict relationship quality. Across 43 dyadic longitudinal datasets from 29 laboratories, the top relationship-specific predictors of relationship quality were perceived-partner commitment, appreciation, sexual satisfaction, perceived-partner satisfaction, and conflict. The top individual-difference predictors were life satisfaction, negative affect, depression, attachment avoidance, and attachment anxiety. Overall, relationship-specific variables predicted up to 45% of variance at baseline, and up to 18% of variance at the end of each study. Individual differences also performed well (21% and 12%, respectively). Actor-reported variables (i.e., own relationship-specific and individual-difference variables) predicted two to four times more variance than partner-reported variables (i.e., the partner’s ratings on those variables). Importantly, individual differences and partner reports had no predictive effects beyond actor-reported relationship-specific variables alone. These findings imply that the sum of all individual differences and partner experiences exert their influence on relationship quality via a person’s own relationship-specific experiences, and effects due to moderation by individual differences and moderation by partner-reports may be quite small. Finally, relationship-quality change (i.e., increases or decreases in relationship quality over the course of a study) was largely unpredictable from any combination of self-report variables. This collective effort should guide future models of relationships.

Significance

What predicts how happy people are with their romantic relationships? Relationship science—an interdisciplinary field spanning psychology, sociology, economics, family studies, and communication—has identified hundreds of variables that purportedly shape romantic relationship quality. The current project used machine learning to directly quantify and compare the predictive power of many such variables among 11,196 romantic couples. People’s own judgments about the relationship itself—such as how satisfied and committed they perceived their partners to be, and how appreciative they felt toward their partners—explained approximately 45% of their current satisfaction. The partner’s judgments did not add information, nor did either person’s personalities or traits. Furthermore, none of these variables could predict whose relationship quality would increase versus decrease over time.
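
A stripped-down version of the analytic approach (synthetic stand-in data, not the consortium's code) looks like this: fit a Random Forest on self-report predictors and report both cross-validated variance explained and a feature ranking.

```python
# Stripped-down version of the analytic approach (synthetic stand-in data, not
# the consortium's code): Random Forests predicting relationship satisfaction,
# with cross-validated variance explained (R^2) and a feature ranking.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1500
X = pd.DataFrame({
    "perceived_partner_commitment": rng.normal(size=n),
    "appreciation": rng.normal(size=n),
    "sexual_satisfaction": rng.normal(size=n),
    "conflict": rng.normal(size=n),
    "life_satisfaction": rng.normal(size=n),
    "attachment_anxiety": rng.normal(size=n),
})
y = (0.6 * X["perceived_partner_commitment"] + 0.4 * X["appreciation"]
     - 0.3 * X["conflict"] + rng.normal(scale=1.0, size=n))

rf = RandomForestRegressor(n_estimators=300, random_state=0)
print("cross-validated R^2:", cross_val_score(rf, X, y, cv=5, scoring="r2").mean().round(2))

rf.fit(X, y)
print(pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False))
```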

Tuesday, February 11, 2020

How to build ethical AI

Carolyn Herzog
thehill.com
Originally posted 18 Jan 20

Here is an excerpt:

Any standard-setting in this field must be rooted in the understanding that data is the lifeblood of AI. The continual input of information is what fuels machine learning, and the most powerful AI tools require massive amounts of it. This of course raises issues of how that data is being collected, how it is being used, and how it is being safeguarded.

One of the most difficult questions we must address is how to overcome bias, particularly the unintentional kind. Let’s consider one potential application for AI: criminal justice. By removing prejudices that contribute to racial and demographic disparities, we can create systems that produce more uniform sentencing standards. Yet, programming such a system still requires weighting countless factors to determine appropriate outcomes. It is a human who must program the AI, and a person’s worldview will shape how they program machines to learn. That’s just one reason why enterprises developing AI must consider workforce diversity and put in place best practices and control for both intentional and inherent bias.

This leads back to transparency.

A computer can make a highly complex decision in an instant, but will we have confidence that it’s making a just one?

Whether a machine is determining a jail sentence, or approving a loan, or deciding who is admitted to a college, how do we explain how those choices were made? And how do we make sure the factors that went into that algorithm are understandable for the average person?

The info is here.

Sunday, January 19, 2020

A Right to a Human Decision

Aziz Z. Huq
Virginia Law Review, Vol. 105
U of Chicago, Public Law Working Paper No. 713

Abstract

Recent advances in computational technologies have spurred anxiety about a shift of power from human to machine decision-makers. From prison sentences to loan approvals to college applications, corporate and state actors increasingly lean on machine learning tools (a subset of artificial intelligence) to allocate goods and to assign coercion. Machine-learning tools are perceived to be eclipsing, even extinguishing, human agency in ways that sacrifice important individual interests. An emerging legal response to such worries is a right to a human decision. European law has already embraced the idea in the General Data Protection Regulation. American law, especially in the criminal justice domain, is already moving in the same direction. But no jurisdiction has defined with precision what that right entails, or furnished a clear justification for its creation.


This Article investigates the legal possibilities of a right to a human decision. I first define the conditions of technological plausibility for that right as applied against state action. To understand its technological predicates, I specify the margins along which machine decisions are distinct from human ones. Such technological contextualization enables a nuanced exploration of why, or indeed whether, the gaps that do separate human and machine decisions might have normative import. Based on this technological accounting, I then analyze the normative stakes of a right to a human decision. I consider three potential normative justifications: (a) an appeal to individual interests in participation and reason-giving; (b) worries about the insufficiently reasoned or individuated quality of state action; and (c) arguments based on negative externalities. A careful analysis of these three grounds suggests that there is no general justification for adopting a right to a human decision by the state. Normative concerns about insufficiently reasoned or accurate decisions, which have a particularly powerful hold on the legal imagination, are best addressed in other ways. Similarly, concerns about the ways that algorithmic tools create asymmetries of social power are not parried by a right to a human decision. Indeed, rather than firmly supporting a right to a human decision, available evidence tentatively points toward a countervailing ‘right to a well-calibrated machine decision’ as ultimately more normatively well-grounded.

The paper can be downloaded here.

Thursday, January 16, 2020

Ethics In AI: Why Values For Data Matter

Marc Teerlink
forbes.com
Originally posted 18 Dec 19

Here is an excerpt:

Data Is an Asset, and It Must Have Values

Already, 22% of U.S. companies have attributed part of their profits to AI and advanced cases of (AI infused) predictive analytics.

According to a recent study SAP conducted in conjunction with the Economist Intelligence Unit, organizations doing the most with machine learning have experienced 43% more growth on average versus those who aren’t using AI and ML at all — or not using AI well.

One of their secrets: They treat data as an asset. The same way organizations treat inventory, fleet, and manufacturing assets.

They start with clear data governance with executive ownership and accountability (for a concrete example of how this looks, here are some principles and governance models that we at SAP apply in our daily work).

So, do treat data as an asset, because, no matter how powerful the algorithm, poor training data will limit the effectiveness of Artificial Intelligence and Predictive Analytics.

The info is here.

Tuesday, January 7, 2020

AI Is Not Similar To Human Intelligence. Thinking So Could Be Dangerous

Elizabeth Fernandez
forbes.com
Originally posted 30 Nov 19

Here is an excerpt:

No doubt, these algorithms are powerful, but to think that they “think” and “learn” in the same way as humans would be incorrect, Watson says. There are many differences, and he outlines three.

The first - DNNs are easy to fool. For example, imagine you have a picture of a banana. A neural network successfully classifies it as a banana. But it’s possible to create a generative adversarial network that can fool your DNN. By adding a slight amount of noise or another image besides the banana, your DNN might now think the picture of a banana is a toaster. A human could not be fooled by such a trick. Some argue that this is because DNNs can see things humans can’t, but Watson says, “This disconnect between biological and artificial neural networks suggests that the latter lack some crucial component essential to navigating the real world.”

Secondly, DNNs need an enormous amount of data to learn. An image classification DNN might need to “see” thousands of pictures of zebras to identify a zebra in an image. Give the same test to a toddler, and chances are s/he could identify a zebra, even one that’s partially obscured, by only seeing a picture of a zebra a few times. Humans are great “one-shot learners,” says Watson. Teaching a neural network, on the other hand, might be very difficult, especially in instances where data is hard to come by.

Thirdly, neural nets are “myopic”. They can see the trees, so to speak, but not the forest. For example, a DNN could successfully label a picture of Kim Kardashian as a woman, an entertainer, and a starlet. However, switching the position of her mouth and one of her eyes actually improved the confidence of the DNN’s prediction. The DNN didn’t see anything wrong with that image. Obviously, something is wrong here. Another example - a human can say “that cloud looks like a dog”, whereas a DNN would say that the cloud is a dog.

The info is here.
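
The "easy to fool" point refers to adversarial examples. Here is a minimal fast-gradient-sign sketch in PyTorch (a toy stand-in classifier and a random image, not a real trained network) showing the mechanics of nudging pixels to flip a prediction.

```python
# Minimal fast-gradient-sign (FGSM) sketch of an adversarial perturbation.
# Illustrative only: a tiny stand-in classifier and a random "image" replace a
# real trained network and photo, but the attack mechanics are the same.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
x = torch.rand(1, 3, 32, 32, requires_grad=True)                 # stand-in image

logits = model(x)
label = logits.argmax(dim=1)                 # attack the model's own current prediction
loss = nn.CrossEntropyLoss()(logits, label)
loss.backward()

epsilon = 0.05                               # small pixel-wise budget; trained nets fall to far less
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)

print("original prediction:   ", label.item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```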

Wednesday, November 13, 2019

MIT Creates World’s First Psychopath AI By Feeding It Reddit Violent Content

Navin Bondade
www.techgrabyte.com
Originally posted October 2019

The psychopathic side of human intelligence is wider and darker than we have fully understood, but scientists have nevertheless tried to implement psychopathy in artificial intelligence.

Scientists at MIT have created the world’s first psychopath AI, called Norman. The purpose of Norman is to demonstrate that AI does not become unfair or biased on its own; it becomes biased by the data that is fed into it.

MIT’s scientists created Norman by training it on violent and gruesome content, images of people dying in gruesome circumstances drawn from an unnamed Reddit page, before showing it a series of Rorschach inkblot tests.

The scientists created a dataset from this unnamed Reddit page and trained Norman to perform image captioning. The data documents the disturbing reality of death.

The info is here.

Monday, July 29, 2019

AI Ethics – Too Principled to Fail?

Brent Mittelstadt
Oxford Internet Institute
https://ssrn.com/abstract=3391293

Abstract

AI Ethics is now a global topic of discussion in academic and policy circles. At least 63 public-private initiatives have produced statements describing high-level principles, values, and other tenets to guide the ethical development, deployment, and governance of AI. According to recent meta-analyses, AI Ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI Ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach in the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.

The paper is here.

Shift from professional ethics to business ethics

The outputs of many AI Ethics initiatives resemble professional codes of ethics that address design requirements and the behaviours and values of individual professions. The legitimacy of particular applications and their underlying business interests remain largely unquestioned. This approach conveniently steers debate towards the transgressions of unethical individuals, and away from the collective failure of unethical businesses and business models. Developers will always be constrained by the institutions that employ them. To be truly effective, the ethical challenges of AI cannot be conceptualised as individual failures. Going forward, AI Ethics must become an ethics of AI businesses as well.

Saturday, March 30, 2019

AI Safety Needs Social Scientists

Geoffrey Irving and Amanda Askell
distill.pub
Originally published February 19, 2019

Here is an excerpt:

Learning values by asking humans questions

We start with the premise that human values are too complex to describe with simple rules. By “human values” we mean our full set of detailed preferences, not general goals such as “happiness” or “loyalty”. One source of complexity is that values are entangled with a large number of facts about the world, and we cannot cleanly separate facts from values when building ML models. For example, a rule that refers to “gender” would require an ML model that accurately recognizes this concept, but Buolamwini and Gebru found that several commercial gender classifiers with a 1% error rate on white men failed to recognize black women up to 34% of the time. Even where people have correct intuition about values, we may be unable to specify precise rules behind these intuitions. Finally, our values may vary across cultures, legal systems, or situations: no learned model of human values will be universally applicable.

If humans can’t reliably report the reasoning behind their intuitions about values, perhaps we can make value judgements in specific cases. To realize this approach in an ML context, we ask humans a large number of questions about whether an action or outcome is better or worse, then train on this data. “Better or worse” will include both factual and value-laden components: for an AI system trained to say things, “better” statements might include “rain falls from clouds”, “rain is good for plants”, “many people dislike rain”, etc. If the training works, the resulting ML system will be able to replicate human judgement about particular situations, and thus have the same “fuzzy access to approximate rules” about values as humans. We also train the ML system to come up with proposed actions, so that it knows both how to perform a task and how to judge its performance. This approach works at least in simple cases, such as Atari games and simple robotics tasks and language-specified goals in gridworlds. The questions we ask change as the system learns to perform different types of actions, which is necessary as the model of what is better or worse will only be accurate if we have applicable data to generalize from.

The info is here.
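
The "ask humans which outcome is better, then train on the answers" idea maps onto a standard preference-learning setup. Here is a hedged sketch (my illustration, not the authors' system) of a small reward model trained with a Bradley-Terry-style loss on pairwise comparisons.

```python
# Sketch of learning a reward model from pairwise human judgments (illustration
# only, not the authors' system). Each training example is a pair of outcomes and
# a label saying which one the human judged "better"; the model is trained with a
# Bradley-Terry-style logistic loss on the difference of predicted rewards.
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 16                                           # feature size of an outcome/action
reward_model = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Synthetic "human" preferences: outcome A is preferred when a hidden score is higher.
true_w = torch.randn(dim)
def sample_pair(n=64):
    a, b = torch.randn(n, dim), torch.randn(n, dim)
    prefer_a = ((a - b) @ true_w > 0).float()      # 1 if the human prefers A
    return a, b, prefer_a

for step in range(500):
    a, b, prefer_a = sample_pair()
    margin = reward_model(a).squeeze(-1) - reward_model(b).squeeze(-1)
    loss = nn.functional.binary_cross_entropy_with_logits(margin, prefer_a)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final preference-prediction loss: {loss.item():.3f}")
```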

Saturday, March 9, 2019

Can AI Help Reduce Disparities in General Medical and Mental Health Care?

Irene Y. Chen, Peter Szolovits, and Marzyeh Ghassemi
AMA J Ethics. 2019;21(2):E167-179.
doi: 10.1001/amajethics.2019.167.

Abstract

Background: As machine learning becomes increasingly common in health care applications, concerns have been raised about bias in these systems’ data, algorithms, and recommendations. Simply put, as health care improves for some, it might not improve for all.

Methods: Two case studies are examined using a machine learning algorithm on unstructured clinical and psychiatric notes to predict intensive care unit (ICU) mortality and 30-day psychiatric readmission with respect to race, gender, and insurance payer type as a proxy for socioeconomic status.

Results: Clinical note topics and psychiatric note topics were heterogeneous with respect to race, gender, and insurance payer type, which reflects known clinical findings. Differences in prediction accuracy and therefore machine bias are shown with respect to gender and insurance type for ICU mortality and with respect to insurance policy for psychiatric 30-day readmission.

Conclusions: This analysis can provide a framework for assessing and identifying disparate impacts of artificial intelligence in health care.
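
The paper's framework comes down to comparing model performance across demographic subgroups. Here is a minimal sketch of that audit (synthetic stand-in predictions, not the authors' code).

```python
# Minimal sketch of the disparate-impact audit the paper describes (synthetic
# stand-in predictions, not the authors' code): compare a model's AUC across
# demographic subgroups such as gender or insurance payer type.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 4000
preds = pd.DataFrame({
    "gender": rng.choice(["F", "M"], size=n),
    "insurance": rng.choice(["private", "public"], size=n),
    "died_in_icu": rng.integers(0, 2, size=n),
})
# Toy predicted risks that track the label more closely for one insurance group.
signal = np.where(preds["insurance"] == "private", 2.0, 0.8)
preds["predicted_risk"] = 1 / (1 + np.exp(-(signal * (preds["died_in_icu"] - 0.5)
                                            + rng.normal(size=n))))

for group_col in ["gender", "insurance"]:
    auc = preds.groupby(group_col).apply(
        lambda g: roc_auc_score(g["died_in_icu"], g["predicted_risk"]))
    print(f"AUC by {group_col}:\n{auc}\n")        # gaps between rows flag potential machine bias
```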

Wednesday, January 2, 2019

The Intuitive Appeal of Explainable Machines

Andrew D. Selbst & Solon Barocas
Fordham Law Review, Volume 87

Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties.

Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible.


In most cases, intuition serves as the unacknowledged bridge from a descriptive account to a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.

The info is here.