Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Tuesday, April 2, 2024

The Puzzle of Evaluating Moral Cognition in Artificial Agents

Reinecke, M. G., Mao, Y., et al. (2023).
Cognitive Science, 47(8).

Abstract

In developing artificial intelligence (AI), researchers often benchmark against human performance as a measure of progress. Is this kind of comparison possible for moral cognition? Given that human moral judgment often hinges on intangible properties like “intention” which may have no natural analog in artificial agents, it may prove difficult to design a “like-for-like” comparison between the moral behavior of artificial and human agents. What would a measure of moral behavior for both humans and AI look like? We unravel the complexity of this question by discussing examples within reinforcement learning and generative AI, and we examine how the puzzle of evaluating artificial agents' moral cognition remains open for further investigation within cognitive science.

The link to the article is the hyperlink above.

Here is my summary:

This article delves into the challenges associated with assessing the moral decision-making capabilities of artificial intelligence systems. It explores the complexities of imbuing AI with ethical reasoning and the difficulties in evaluating their moral cognition. The article discusses the need for robust frameworks and methodologies to effectively gauge the ethical behavior of AI, highlighting the intricate nature of integrating morality into machine learning algorithms. Overall, it emphasizes the critical importance of developing reliable methods to evaluate the moral reasoning of artificial agents in order to ensure their responsible and ethical deployment in various domains.

Friday, February 10, 2023

Individual differences in (dis)honesty are represented in the brain's functional connectivity at rest

Speer, S. P., Smidts, A., & Boksem, M. A. (2022).
NeuroImage, 246, 118761.
https://doi.org/10.1016/j.neuroimage.2021.118761

Abstract

Measurement of the determinants of socially undesirable behaviors, such as dishonesty, are complicated and obscured by social desirability biases. To circumvent these biases, we used connectome-based predictive modeling (CPM) on resting state functional connectivity patterns in combination with a novel task which inconspicuously measures voluntary cheating to gain access to the neurocognitive determinants of (dis)honesty. Specifically, we investigated whether task-independent neural patterns within the brain at rest could be used to predict a propensity for (dis)honest behavior. Our analyses revealed that functional connectivity, especially between brain networks linked to self-referential thinking (vmPFC, temporal poles, and PCC) and reward processing (caudate nucleus), reliably correlates, in an independent sample, with participants’ propensity to cheat. Participants who cheated the most also scored highest on several self-report measures of impulsivity which underscores the generalizability of our results. Notably, when comparing neural and self-report measures, the neural measures were found to be more important in predicting cheating propensity.

Significance statement

Dishonesty pervades all aspects of life and causes enormous economic losses. However, because the underlying mechanisms of socially undesirable behaviors are difficult to measure, the neurocognitive determinants of individual differences in dishonesty largely remain unknown. Here, we apply machine-learning methods to stable patterns of neural connectivity to investigate how dispositions toward (dis)honesty, measured by an innovative behavioral task, are encoded in the brain. We found that stronger connectivity between brain regions associated with self-referential thinking and reward are predictive of the propensity to be honest. The high predictive accuracy of our machine-learning models, combined with the reliable nature of resting-state functional connectivity, which is uncontaminated by the social-desirability biases to which self-report measures are susceptible, provides an excellent avenue for the development of useful neuroimaging-based biomarkers of socially undesirable behaviors.

Discussion

Employing connectome-based predictive modeling (CPM) in combination with the innovative Spot-The-Differences task, which allows for inconspicuously measuring cheating, we identified a functional connectome that reliably predicts a disposition toward (dis)honesty in an independent sample. We observed a Pearson correlation between out-of-sample predicted and actual cheatcount (r = 0.40) that resides on the higher side of the typical range of correlations (between r = 0.2 and r = 0.5) reported in previous studies employing CPM (Shen et al., 2017). Thus, functional connectivity within the brain at rest predicts whether someone is more honest or more inclined to cheat in our task.
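
To make the evaluation step concrete, here is a minimal Python sketch of a CPM-style out-of-sample analysis. It is illustrative only: the `connectivity` and `cheat_count` arrays are synthetic stand-ins, a simple summed-edge-strength model replaces the authors' full pipeline, and with random data the resulting correlation will hover near zero rather than the reported r = 0.40.

```python
# Minimal sketch of a CPM-style out-of-sample evaluation (illustrative only).
# `connectivity` (subjects x edges) and `cheat_count` are hypothetical arrays,
# not the authors' data or code.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_subjects, n_edges = 100, 500
connectivity = rng.normal(size=(n_subjects, n_edges))          # resting-state edge strengths
cheat_count = rng.poisson(3, size=n_subjects).astype(float)    # behavioral measure

preds = np.empty(n_subjects)
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(connectivity):
    # Step 1: select edges whose strength correlates with behavior in the training folds.
    r_edges = np.array([pearsonr(connectivity[train, j], cheat_count[train])[0]
                        for j in range(n_edges)])
    selected = np.abs(r_edges) > 0.2
    # Step 2: summarize each subject by the summed strength of the selected edges.
    train_scores = connectivity[train][:, selected].sum(axis=1, keepdims=True)
    test_scores = connectivity[test][:, selected].sum(axis=1, keepdims=True)
    # Step 3: fit a linear model on training subjects, predict the held-out subjects.
    model = LinearRegression().fit(train_scores, cheat_count[train])
    preds[test] = model.predict(test_scores)

# The headline statistic: correlation between predicted and observed cheat counts.
r, p = pearsonr(preds, cheat_count)
print(f"out-of-sample r = {r:.2f} (p = {p:.3f})")
```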

In light of previous research on moral decisions, the regions we identified in our resting state analysis can be associated with two networks frequently found to be involved in moral decision making. First, the vmPFC, the bilateral temporal poles and the PCC have consistently been associated with self-referential thinking. For example, it has been found that functional connectivity between these areas during rest is associated with higher-level metacognitive operations such as self-reflection, introspection and self-awareness (Gusnard et al., 2001; Meffert et al., 2013; Northoff et al., 2006; Vanhaudenhuyse et al., 2011). Secondly, the caudate nucleus, which has been found to be involved in anticipation and valuation of rewards (Ballard and Knutson, 2009; Knutson et al., 2001) can be considered an important node in the reward network (Bartra et al., 2013). Participants with higher levels of activation in the reward network, in anticipation of rewards, have previously been found to indeed be more dishonest (Abe and Greene, 2014).

Thursday, April 7, 2022

How to Prevent Robotic Sociopaths: A Neuroscience Approach to Artificial Ethics

Christov-Moore, L., Reggente, N., et al.
https://doi.org/10.31234/osf.io/6tn42

Abstract

Artificial intelligence (AI) is expanding into every niche of human life, organizing our activity, expanding our agency and interacting with us to an increasing extent. At the same time, AI’s efficiency, complexity and refinement are growing quickly. Justifiably, there is increasing concern with the immediate problem of engineering AI that is aligned with human interests.

Computational approaches to the alignment problem attempt to design AI systems to parameterize human values like harm and flourishing, and avoid overly drastic solutions, even if these are seemingly optimal. In parallel, ongoing work in service AI (caregiving, consumer care, etc.) is concerned with developing artificial empathy, teaching AI’s to decode human feelings and behavior, and evince appropriate, empathetic responses. This could be equated to cognitive empathy in humans.

We propose that in the absence of affective empathy (which allows us to share in the states of others), existing approaches to artificial empathy may fail to produce the caring, prosocial component of empathy, potentially resulting in superintelligent, sociopath-like AI. We adopt the colloquial usage of “sociopath” to signify an intelligence possessing cognitive empathy (i.e., the ability to infer and model the internal states of others), but crucially lacking harm aversion and empathic concern arising from vulnerability, embodiment, and affective empathy (which permits for shared experience). An expanding, ubiquitous intelligence that does not have a means to care about us poses a species-level risk.

It is widely acknowledged that harm aversion is a foundation of moral behavior. However, harm aversion is itself predicated on the experience of harm, within the context of the preservation of physical integrity. Following from this, we argue that a “top-down” rule-based approach to achieving caring, aligned AI may be unable to anticipate and adapt to the inevitable novel moral/logistical dilemmas faced by an expanding AI. It may be more effective to cultivate prosociality from the bottom up, baked into an embodied, vulnerable artificial intelligence with an incentive to preserve its real or simulated physical integrity. This may be achieved via optimization for incentives and contingencies inspired by the development of empathic concern in vivo. We outline the broad prerequisites of this approach and review ongoing work that is consistent with our rationale.

If successful, work of this kind could allow for AI that surpasses empathic fatigue and the idiosyncrasies, biases, and computational limits of human empathy. The scaleable complexity of AI may allow it unprecedented capability to deal proportionately and compassionately with complex, large-scale ethical dilemmas. By addressing this problem seriously in the early stages of AI’s integration with society, we might eventually produce an AI that plans and behaves with an ingrained regard for the welfare of others, aided by the scalable cognitive complexity necessary to model and solve extraordinary problems.

Tuesday, March 9, 2021

How social learning amplifies moral outrage expression in online social networks

Brady, W. J., McLoughlin, K. L., et al.
(2021, January 19).
https://doi.org/10.31234/osf.io/gf7t5

Abstract

Moral outrage shapes fundamental aspects of human social life and is now widespread in online social networks. Here, we show how social learning processes amplify online moral outrage expressions over time. In two pre-registered observational studies of Twitter (7,331 users and 12.7 million total tweets) and two pre-registered behavioral experiments (N = 240), we find that positive social feedback for outrage expressions increases the likelihood of future outrage expressions, consistent with principles of reinforcement learning. We also find that outrage expressions are sensitive to expressive norms in users’ social networks, over and above users’ own preferences, suggesting that norm learning processes guide online outrage expressions. Moreover, expressive norms moderate social reinforcement of outrage: in ideologically extreme networks, where outrage expression is more common, users are less sensitive to social feedback when deciding whether to express outrage. Our findings highlight how platform design interacts with human learning mechanisms to impact moral discourse in digital public spaces.
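
The reinforcement-learning account can be illustrated with a toy simulation. This is a hypothetical Rescorla-Wagner-style sketch, not the authors' model: positive social feedback raises the learned value of expressing outrage, which in turn raises the probability of expressing it again.

```python
# Toy simulation of the reinforcement-learning account (hypothetical, not the authors' model):
# positive social feedback for an outrage expression increases the learned value of expressing
# outrage, which raises the probability of expressing it on future posts.
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.1          # learning rate
value = 0.0          # learned value of expressing outrage
feedback_rate = 0.8  # assumed chance that an outrage post receives positive feedback

def p_express(v):
    """Logistic choice rule mapping learned value to expression probability."""
    return 1.0 / (1.0 + np.exp(-v))

history = []
for post in range(200):
    expressed = rng.random() < p_express(value)
    if expressed:
        reward = 1.0 if rng.random() < feedback_rate else 0.0  # likes/shares as reward
        value += alpha * (reward - value)                       # prediction-error update
    history.append(p_express(value))

print(f"P(express outrage): start {history[0]:.2f} -> end {history[-1]:.2f}")
```

When feedback is reliably positive, the expression probability drifts upward over successive posts, mirroring the amplification pattern reported in the observational data.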

From the Conclusion

At first blush, documenting the role of reinforcement learning in online outrage expressions may seem trivial. Of course, we should expect that a fundamental principle of human behavior, extensively observed in offline settings, will similarly describe behavior in online settings. However, reinforcement learning of moral behaviors online, combined with the design of social media platforms, may have especially important social implications. Social media newsfeed algorithms can directly impact how much social feedback a given post receives by determining how many other users are exposed to that post. Because we show here that social feedback impacts users’ outrage expressions over time, this suggests newsfeed algorithms can influence users’ moral behaviors by exploiting their natural tendencies for reinforcement learning.  In this way, reinforcement learning on social media differs from reinforcement learning in other environments because crucial inputs to the learning process are shaped by corporate interests. Even if platform designers do not intend to amplify moral outrage, design choices aimed at satisfying other goals --such as profit maximization via user engagement --can indirectly impact moral behavior because outrage-provoking content draws high engagement. Given that moral outrage plays a critical role in collective action and social change, our data suggest that platform designers have the ability to influence the success or failure of social and political movements, as well as informational campaigns designed to influence users’ moral and political attitudes. Future research is required to understand whether users are aware of this, and whether making such knowledge salient can impact their online behavior.


People are more likely to express online "moral outrage" if they have been rewarded for it in the past or if it is common in their own social network. They are even willing to express far more moral outrage than they genuinely feel in order to fit in.

Tuesday, December 1, 2020

Using Machine Learning to Generate Novel Hypotheses: Increasing Optimism About COVID-19 Makes People Less Willing to Justify Unethical Behaviors

Sheetal A, Feng Z, Savani K. 
Psychological Science. 2020;31(10):
1222-1235. 
doi:10.1177/0956797620959594

Abstract

How can we nudge people to not engage in unethical behaviors, such as hoarding and violating social-distancing guidelines, during the COVID-19 pandemic? Because past research on antecedents of unethical behavior has not provided a clear answer, we turned to machine learning to generate novel hypotheses. We trained a deep-learning model to predict whether or not World Values Survey respondents perceived unethical behaviors as justifiable, on the basis of their responses to 708 other items. The model identified optimism about the future of humanity as one of the top predictors of unethicality. A preregistered correlational study (N = 218 U.S. residents) conceptually replicated this finding. A preregistered experiment (N = 294 U.S. residents) provided causal support: Participants who read a scenario conveying optimism about the COVID-19 pandemic were less willing to justify hoarding and violating social-distancing guidelines than participants who read a scenario conveying pessimism. The findings suggest that optimism can help reduce unethicality, and they document the utility of machine-learning methods for generating novel hypotheses.
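
The hypothesis-generation workflow, training a predictive model on many survey items and then ranking the items by predictive importance, can be sketched as follows. This is a schematic illustration with synthetic data and hypothetical item names; a random forest with permutation importance stands in for the authors' deep-learning model.

```python
# Schematic of the hypothesis-generation workflow (synthetic data, hypothetical item names;
# a random forest stands in for the authors' deep-learning model).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
X = pd.DataFrame({
    "optimism_about_humanity": rng.normal(size=n),   # hypothetical survey items
    "trust_in_strangers": rng.normal(size=n),
    "religiosity": rng.normal(size=n),
    "income_decile": rng.integers(1, 11, size=n).astype(float),
})
# Synthetic ground truth: more optimism -> lower odds of judging unethical acts justifiable.
logits = -1.2 * X["optimism_about_humanity"] + 0.3 * rng.normal(size=n)
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)   # 1 = "justifiable"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Rank items by how much shuffling each one degrades held-out accuracy.
imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name:28s} {score:.3f}")
```

An item that consistently tops such a ranking, as optimism did in the paper, becomes a candidate hypothesis for preregistered follow-up studies.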

Here is how the research article begins:

Unethical behaviors can have substantial consequences in times of crisis. For example, in the midst of the COVID-19 pandemic, many people hoarded face masks and hand sanitizers; this hoarding deprived those who needed protective supplies most (e.g., medical workers and the elderly) and, therefore, put them at risk. Despite escalating deaths, more than 50,000 people were caught violating quarantine orders in Italy, putting themselves and others at risk. Some governments covered up the scale of the pandemic in their countries, thereby allowing the infection to spread in an uncontrolled manner. Thus, understanding antecedents of unethical behavior and identifying nudges to reduce unethical behaviors are particularly important in times of crisis.

Here is part of the Discussion

We formulated a novel hypothesis—that optimism reduces unethicality—on the basis of the deep-learning model’s finding that whether people think that the future of humanity is bleak or bright is a strong predictor of unethicality. This variable was not flagged as a top predictor either by the correlational analysis or by the lasso regression. Consistent with this idea, the results of a correlational study showed that people higher on dispositional optimism were less willing to engage in unethical behaviors. A following experiment found that increasing participants’ optimism about the COVID-19 epidemic reduced the extent to which they justified unethical behaviors related to the epidemic. The behavioral studies were conducted with U.S. American participants; thus, the cultural generalizability of the present findings is unclear. Future research needs to test whether optimism reduces unethical behavior in other cultural contexts.

Tuesday, October 13, 2020

Machine learning uncovers the most robust self-report predictors of relationship quality across 43 longitudinal couples studies

S. Joel and others
Proceedings of the National Academy of Sciences 
Aug 2020, 117 (32) 19061-19071
DOI: 10.1073/pnas.1917036117

Abstract

Given the powerful implications of relationship quality for health and well-being, a central mission of relationship science is explaining why some romantic relationships thrive more than others. This large-scale project used machine learning (i.e., Random Forests) to 1) quantify the extent to which relationship quality is predictable and 2) identify which constructs reliably predict relationship quality. Across 43 dyadic longitudinal datasets from 29 laboratories, the top relationship-specific predictors of relationship quality were perceived-partner commitment, appreciation, sexual satisfaction, perceived-partner satisfaction, and conflict. The top individual-difference predictors were life satisfaction, negative affect, depression, attachment avoidance, and attachment anxiety. Overall, relationship-specific variables predicted up to 45% of variance at baseline, and up to 18% of variance at the end of each study. Individual differences also performed well (21% and 12%, respectively). Actor-reported variables (i.e., own relationship-specific and individual-difference variables) predicted two to four times more variance than partner-reported variables (i.e., the partner’s ratings on those variables). Importantly, individual differences and partner reports had no predictive effects beyond actor-reported relationship-specific variables alone. These findings imply that the sum of all individual differences and partner experiences exert their influence on relationship quality via a person’s own relationship-specific experiences, and effects due to moderation by individual differences and moderation by partner-reports may be quite small. Finally, relationship-quality change (i.e., increases or decreases in relationship quality over the course of a study) was largely unpredictable from any combination of self-report variables. This collective effort should guide future models of relationships.
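
For readers unfamiliar with the method, a Random Forest analysis of this kind looks roughly like the sketch below. The data and predictor names are hypothetical placeholders, not the pooled 43-dataset pipeline, and the cross-validated R² is the "variance explained" figure reported in the abstract.

```python
# Rough sketch of a Random Forest analysis of relationship quality (hypothetical data and
# predictor names; the real project pooled 43 dyadic longitudinal datasets).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 1500
X = pd.DataFrame({
    "perceived_partner_commitment": rng.normal(size=n),
    "appreciation": rng.normal(size=n),
    "sexual_satisfaction": rng.normal(size=n),
    "conflict": rng.normal(size=n),
    "attachment_anxiety": rng.normal(size=n),
})
# Synthetic outcome loosely driven by the relationship-specific predictors.
y = (0.5 * X["perceived_partner_commitment"] + 0.3 * X["appreciation"]
     - 0.3 * X["conflict"] + rng.normal(scale=1.0, size=n))

model = RandomForestRegressor(n_estimators=300, random_state=0)
r2_scores = cross_val_score(model, X, y, cv=5, scoring="r2")   # out-of-sample variance explained
print(f"cross-validated R^2: {r2_scores.mean():.2f}")
```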

Significance

What predicts how happy people are with their romantic relationships? Relationship science—an interdisciplinary field spanning psychology, sociology, economics, family studies, and communication—has identified hundreds of variables that purportedly shape romantic relationship quality. The current project used machine learning to directly quantify and compare the predictive power of many such variables among 11,196 romantic couples. People’s own judgments about the relationship itself—such as how satisfied and committed they perceived their partners to be, and how appreciative they felt toward their partners—explained approximately 45% of their current satisfaction. The partner’s judgments did not add information, nor did either person’s personalities or traits. Furthermore, none of these variables could predict whose relationship quality would increase versus decrease over time.

Tuesday, February 11, 2020

How to build ethical AI

Carolyn Herzog
thehill.com
Originally posted 18 Jan 20

Here is an excerpt:

Any standard-setting in this field must be rooted in the understanding that data is the lifeblood of AI. The continual input of information is what fuels machine learning, and the most powerful AI tools require massive amounts of it. This of course raises issues of how that data is being collected, how it is being used, and how it is being safeguarded.

One of the most difficult questions we must address is how to overcome bias, particularly the unintentional kind. Let’s consider one potential application for AI: criminal justice. By removing prejudices that contribute to racial and demographic disparities, we can create systems that produce more uniform sentencing standards. Yet, programming such a system still requires weighting countless factors to determine appropriate outcomes. It is a human who must program the AI, and a person’s worldview will shape how they program machines to learn. That’s just one reason why enterprises developing AI must consider workforce diversity and put in place best practices and control for both intentional and inherent bias.
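
One concrete way to "control for both intentional and inherent bias" is to audit a model's error rates separately for each demographic group. The sketch below is a generic, hypothetical illustration with synthetic labels and predictions, not a description of any deployed sentencing system.

```python
# Generic bias audit: compare false-positive rates across demographic groups
# (synthetic labels and predictions; illustrative only).
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=5000),
    "actual_reoffend": rng.integers(0, 2, size=5000),
    "predicted_high_risk": rng.integers(0, 2, size=5000),
})

for group, sub in df.groupby("group"):
    negatives = sub[sub["actual_reoffend"] == 0]
    fpr = negatives["predicted_high_risk"].mean()   # share of non-reoffenders flagged high risk
    print(f"group {group}: false-positive rate = {fpr:.2f}")
```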

This leads back to transparency.

A computer can make a highly complex decision in an instant, but will we have confidence that it’s making a just one?

Whether a machine is determining a jail sentence, or approving a loan, or deciding who is admitted to a college, how do we explain how those choices were made? And how do we make sure the factors that went into that algorithm are understandable for the average person?

The info is here.

Sunday, January 19, 2020

A Right to a Human Decision

Aziz Z. Huq
Virginia Law Review, Vol. 105
U of Chicago, Public Law Working Paper No. 713

Abstract

Recent advances in computational technologies have spurred anxiety about a shift of power from human to machine decision-makers. From prison sentences to loan approvals to college applications, corporate and state actors increasingly lean on machine learning tools (a subset of artificial intelligence) to allocate goods and to assign coercion. Machine-learning tools are perceived to be eclipsing, even extinguishing, human agency in ways that sacrifice important individual interests. An emerging legal response to such worries is a right to a human decision. European law has already embraced the idea in the General Data Protection Regulation. American law, especially in the criminal justice domain, is already moving in the same direction. But no jurisdiction has defined with precision what that right entails, or furnished a clear justification for its creation.


This Article investigates the legal possibilities of a right to a human decision. I first define the conditions of technological plausibility for that right as applied against state action. To understand its technological predicates, I specify the margins along which machine decisions are distinct from human ones. Such technological contextualization enables a nuanced exploration of why, or indeed whether, the gaps that do separate human and machine decisions might have normative import. Based on this technological accounting, I then analyze the normative stakes of a right to a human decision. I consider three potential normative justifications: (a) an appeal to individual interests to participation and reason-giving; (b) worries about the insufficiently reasoned or individuated quality of state action; and (c) arguments based on negative externalities. A careful analysis of these three grounds suggests that there is no general justification for adopting a right to a human decision by the state. Normative concerns about insufficiently reasoned or accurate decisions, which have a particularly powerful hold on the legal imagination, are best addressed in other ways. Similarly, concerns about the ways that algorithmic tools create asymmetries of social power are not parried by a right to a human decision. Indeed, rather than firmly supporting a right to a human decision, available evidence tentatively points toward a countervailing ‘right to a well-calibrated machine decision’ as ultimately more normatively well-grounded.

The paper can be downloaded here.

Thursday, January 16, 2020

Ethics In AI: Why Values For Data Matter

Marc Teerlink
forbes.com
Originally posted 18 Dec 19

Here is an excerpt:

Data Is an Asset, and It Must Have Values

Already, 22% of U.S. companies have attributed part of their profits to AI and advanced cases of (AI infused) predictive analytics.

According to a recent study SAP conducted in conjunction with The Economist Intelligence Unit, organizations doing the most with machine learning have experienced 43% more growth on average versus those who aren’t using AI and ML at all — or not using AI well.

One of their secrets: They treat data as an asset. The same way organizations treat inventory, fleet, and manufacturing assets.

They start with clear data governance with executive ownership and accountability (for a concrete example of how this looks, here are some principles and governance models that we at SAP apply in our daily work).

So, do treat data as an asset, because, no matter how powerful the algorithm, poor training data will limit the effectiveness of Artificial Intelligence and Predictive Analytics.

The info is here.

Tuesday, January 7, 2020

AI Is Not Similar To Human Intelligence. Thinking So Could Be Dangerous

Elizabeth Fernandez
forbes.com
Originally posted 30 Nov 19

Here is an excerpt:

No doubt, these algorithms are powerful, but to think that they “think” and “learn” in the same way as humans would be incorrect, Watson says. There are many differences, and he outlines three.

The first - DNNs are easy to fool. For example, imagine you have a picture of a banana. A neural network successfully classifies it as a banana. But it’s possible to create a generative adversarial network that can fool your DNN. By adding a slight amount of noise or another image besides the banana, your DNN might now think the picture of a banana is a toaster. A human could not be fooled by such a trick. Some argue that this is because DNNs can see things humans can’t, but Watson says, “This disconnect between biological and artificial neural networks suggests that the latter lack some crucial component essential to navigating the real world.”
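
The banana-to-toaster failure described above is usually produced with a gradient-based adversarial perturbation. The PyTorch sketch below shows the fast gradient sign method (FGSM); it assumes an ImageNet-style pretrained classifier and uses a random tensor as a stand-in for a preprocessed banana photo, so it is illustrative rather than a reproduction of the example in the article.

```python
# Fast gradient sign method (FGSM): the standard recipe behind "banana -> toaster"
# adversarial examples. Illustrative sketch; assumes an ImageNet-style classifier
# and uses a random tensor as a stand-in for a preprocessed input image.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224)      # stand-in for a preprocessed banana photo
true_label = torch.tensor([954])        # ImageNet class index for "banana"

image.requires_grad_(True)
loss = F.cross_entropy(model(image), true_label)
loss.backward()

# Nudge every pixel a tiny step in the direction that increases the loss.
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).detach().clamp(0, 1)

with torch.no_grad():
    print("original prediction:   ", model(image).argmax(dim=1).item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```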

Secondly, DNNs need an enormous amount of data to learn. An image classification DNN might need to “see” thousands of pictures of zebras to identify a zebra in an image. Give the same test to a toddler, and chances are s/he could identify a zebra, even one that’s partially obscured, by only seeing a picture of a zebra a few times. Humans are great “one-shot learners,” says Watson. Teaching a neural network, on the other hand, might be very difficult, especially in instances where data is hard to come by.

Thirdly, neural nets are “myopic”. They can see the trees, so to speak, but not the forest. For example, a DNN could successfully label a picture of Kim Kardashian as a woman, an entertainer, and a starlet. However, switching the position of her mouth and one of her eyes actually improved the confidence of the DNN’s prediction. The DNN didn’t see anything wrong with that image. Obviously, something is wrong here. Another example - a human can say “that cloud looks like a dog”, whereas a DNN would say that the cloud is a dog.

The info is here.

Wednesday, November 13, 2019

MIT Creates World’s First Psychopath AI By Feeding It Reddit Violent Content

MIT Creates World's First Psychopath AI By Feeding It Reddit Violent ContentNavin Bondade
www.techgrabyte.com
Originally posted October 2019

The reach of psychopathy within human intelligence is wider and darker than we yet fully understand, but scientists have nonetheless attempted to implement a form of it in artificial intelligence.

Scientists at MIT have created the world’s first psychopath AI, called Norman. The purpose of Norman is to demonstrate that an AI is not inherently unfair or biased; it becomes so when biased data is fed into it.

The MIT researchers created Norman by training it on violent and gruesome content, images of people dying in disturbing circumstances drawn from an unnamed Reddit page, before showing it a series of Rorschach inkblot tests.

The researchers built a dataset from this unnamed Reddit page, which is devoted to documenting the disturbing reality of death, and trained Norman to perform image captioning on it.

The info is here.

Monday, July 29, 2019

AI Ethics – Too Principled to Fail?

Brent Mittelstadt
Oxford Internet Institute
https://ssrn.com/abstract=3391293

Abstract

AI Ethics is now a global topic of discussion in academic and policy circles. At least 63 public-private initiatives have produced statements describing high-level principles, values, and other tenets to guide the ethical development, deployment, and governance of AI. According to recent meta-analyses, AI Ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI Ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach in the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.

The paper is here.

Shift from professional ethics to business ethics

The outputs of many AI Ethics initiatives resemble professional codes of ethics that address design requirements and the behaviours and values of individual professions. The legitimacy of particular applications and their underlying business interests remain largely unquestioned. This approach conveniently steers debate towards the transgressions of unethical individuals, and away from the collective failure of unethical businesses and business models. Developers will always be constrained by the institutions that employ them. To be truly effective, the ethical challenges of AI cannot be conceptualised as individual failures. Going forward, AI Ethics must become an ethics of AI businesses as well.

Saturday, March 30, 2019

AI Safety Needs Social Scientists

Geoffrey Irving and Amanda Askell
distill.pub
Originally published February 19, 2019

Here is an excerpt:

Learning values by asking humans questions

We start with the premise that human values are too complex to describe with simple rules. By “human values” we mean our full set of detailed preferences, not general goals such as “happiness” or “loyalty”. One source of complexity is that values are entangled with a large number of facts about the world, and we cannot cleanly separate facts from values when building ML models. For example, a rule that refers to “gender” would require an ML model that accurately recognizes this concept, but Buolamwini and Gebru found that several commercial gender classifiers with a 1% error rate on white men failed to recognize black women up to 34% of the time. Even where people have correct intuition about values, we may be unable to specify precise rules behind these intuitions. Finally, our values may vary across cultures, legal systems, or situations: no learned model of human values will be universally applicable.

If humans can’t reliably report the reasoning behind their intuitions about values, perhaps we can make value judgements in specific cases. To realize this approach in an ML context, we ask humans a large number of questions about whether an action or outcome is better or worse, then train on this data. “Better or worse” will include both factual and value-laden components: for an AI system trained to say things, “better” statements might include “rain falls from clouds”, “rain is good for plants”, “many people dislike rain”, etc. If the training works, the resulting ML system will be able to replicate human judgement about particular situations, and thus have the same “fuzzy access to approximate rules” about values as humans. We also train the ML system to come up with proposed actions, so that it knows both how to perform a task and how to judge its performance. This approach works at least in simple cases, such as Atari games and simple robotics tasks and language-specified goals in gridworlds. The questions we ask change as the system learns to perform different types of actions, which is necessary as the model of what is better or worse will only be accurate if we have applicable data to generalize from.
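
The training loop sketched in this excerpt, asking humans which of two outcomes is better and fitting a model to those comparisons, is commonly implemented as a pairwise preference (Bradley-Terry) loss over a learned reward model. The sketch below is a generic illustration of that idea with synthetic feature vectors standing in for encoded statements or actions; it is not the authors' system.

```python
# Generic reward-model training from pairwise human judgments ("A is better than B"),
# using a Bradley-Terry / logistic preference loss. Features are synthetic stand-ins
# for encoded statements or actions; this is not the authors' system.
import torch
import torch.nn as nn

torch.manual_seed(0)
dim = 16
reward_model = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Synthetic "ground-truth" preference direction, used only to simulate human answers.
true_w = torch.randn(dim)

for step in range(500):
    a, b = torch.randn(32, dim), torch.randn(32, dim)           # pairs of candidate outcomes
    human_prefers_a = ((a - b) @ true_w > 0).float()            # simulated human judgments
    # Bradley-Terry: P(a preferred) = sigmoid(r(a) - r(b)).
    logits = reward_model(a).squeeze(-1) - reward_model(b).squeeze(-1)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, human_prefers_a)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final preference-prediction loss: {loss.item():.3f}")
```

A reward model trained this way can then score proposed actions, giving the system the same "fuzzy access to approximate rules" about values that the excerpt describes.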

The info is here.

Saturday, March 9, 2019

Can AI Help Reduce Disparities in General Medical and Mental Health Care?

Irene Y. Chen, Peter Szolovits, and Marzyeh Ghassemi
AMA J Ethics. 2019;21(2):E167-179.
doi: 10.1001/amajethics.2019.167.

Abstract

Background: As machine learning becomes increasingly common in health care applications, concerns have been raised about bias in these systems’ data, algorithms, and recommendations. Simply put, as health care improves for some, it might not improve for all.

Methods: Two case studies are examined using a machine learning algorithm on unstructured clinical and psychiatric notes to predict intensive care unit (ICU) mortality and 30-day psychiatric readmission with respect to race, gender, and insurance payer type as a proxy for socioeconomic status.

Results: Clinical note topics and psychiatric note topics were heterogenous with respect to race, gender, and insurance payer type, which reflects known clinical findings. Differences in prediction accuracy and therefore machine bias are shown with respect to gender and insurance type for ICU mortality and with respect to insurance policy for psychiatric 30-day readmission.
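
Disparities of the kind reported in the Results are typically quantified by evaluating the same model separately within each group. Here is a minimal sketch with synthetic risk scores and group labels rather than the study's clinical notes; a gap in per-group AUC is the machine bias being measured.

```python
# Minimal disparate-performance check: evaluate one model's predictions separately
# within each demographic group (synthetic predictions and labels; illustrative only).
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 4000
df = pd.DataFrame({
    "insurance": rng.choice(["private", "public"], size=n),
    "mortality": rng.integers(0, 2, size=n),
})
# Synthetic risk scores that track the outcome more weakly for one group.
noise = np.where(df["insurance"] == "public", 1.5, 0.5)
df["risk_score"] = df["mortality"] + rng.normal(scale=noise, size=n)

for group, sub in df.groupby("insurance"):
    auc = roc_auc_score(sub["mortality"], sub["risk_score"])
    print(f"{group:8s} AUC = {auc:.2f}")   # gaps here indicate disparate performance
```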

Conclusions: This analysis can provide a framework for assessing and identifying disparate impacts of artificial intelligence in health care.

Wednesday, January 2, 2019

The Intuitive Appeal of Explainable Machines

Andrew D. Selbst & Solon Barocas
Fordham Law Review -Volume 87

Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties.

Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible.
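
Techniques aimed at inscrutability try to provide a sensible description of the rules, for example by fitting an interpretable surrogate to a black-box model's predictions. The sketch below illustrates that generic idea with synthetic data and hypothetical feature names; it is not drawn from the Article itself.

```python
# Global surrogate: approximate a black-box model with a shallow decision tree so that
# its rules can be described. Illustrative sketch; synthetic data and hypothetical
# feature names, not drawn from the Article.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.3).astype(int)

black_box = GradientBoostingClassifier().fit(X, y)                        # the inscrutable model
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=["income", "debt", "tenure", "age"]))
```

Such a description addresses inscrutability, but, as the excerpt argues, it does not by itself show whether the underlying basis for decision-making is normatively defensible.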


In most cases, intuition serves as the unacknowledged bridge from a descriptive account to a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.

The info is here.

Tuesday, December 11, 2018

Is It Ethical to Use Prognostic Estimates from Machine Learning to Treat Psychosis?

Nicole Martinez-Martin, Laura B. Dunn, and Laura Weiss Roberts
AMA J Ethics. 2018;20(9):E804-811.
doi: 10.1001/amajethics.2018.804.

Abstract

Machine learning is a method for predicting clinically relevant variables, such as opportunities for early intervention, potential treatment response, prognosis, and health outcomes. This commentary examines the following ethical questions about machine learning in a case of a patient with new onset psychosis: (1) When is clinical innovation ethically acceptable? (2) How should clinicians communicate with patients about the ethical issues raised by a machine learning predictive model?

(cut)

Conclusion

In order to implement the predictive tool in an ethical manner, Dr K will need to carefully consider how to give appropriate information—in an understandable manner—to patients and families regarding use of the predictive model. In order to maximize benefits from the predictive model and minimize risks, Dr K and the institution as a whole will need to formulate ethically appropriate procedures and protocols surrounding the instrument. For example, implementation of the predictive tool should consider the ability of a physician to override the predictive model in support of ethically or clinically important variables or values, such as beneficence. Such measures could help realize the clinical application potential of machine learning tools, such as this psychosis prediction model, to improve the lives of patients.

Wednesday, August 29, 2018

The ethics of computer science: this researcher has a controversial proposal

Elizabeth Gibney
www.nature.com
Originally published July 26, 2018

In the midst of growing public concern over artificial intelligence (AI), privacy and the use of data, Brent Hecht has a controversial proposal: the computer-science community should change its peer-review process to ensure that researchers disclose any possible negative societal consequences of their work in papers, or risk rejection.

Hecht, a computer scientist, chairs the Future of Computing Academy (FCA), a group of young leaders in the field that pitched the policy in March. Without such measures, he says, computer scientists will blindly develop products without considering their impacts, and the field risks joining oil and tobacco as industries whose researchers history judges unfavourably.

The FCA is part of the Association for Computing Machinery (ACM) in New York City, the world’s largest scientific-computing society. It, too, is making changes to encourage researchers to consider societal impacts: on 17 July, it published an updated version of its ethics code, last redrafted in 1992. The guidelines call on researchers to be alert to how their work can influence society, take steps to protect privacy and continually reassess technologies whose impact will change over time, such as those based in machine learning.

The rest is here.

Wednesday, August 8, 2018

Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates

Marshall Allen
ProPublica.org
Originally posted July 17, 2018

Here are two excerpts:

With little public scrutiny, the health insurance industry has joined forces with data brokers to vacuum up personal details about hundreds of millions of Americans, including, odds are, many readers of this story. The companies are tracking your race, education level, TV habits, marital status, net worth. They’re collecting what you post on social media, whether you’re behind on your bills, what you order online. Then they feed this information into complicated computer algorithms that spit out predictions about how much your health care could cost them.

(cut)

At a time when every week brings a new privacy scandal and worries abound about the misuse of personal information, patient advocates and privacy scholars say the insurance industry’s data gathering runs counter to its touted, and federally required, allegiance to patients’ medical privacy. The Health Insurance Portability and Accountability Act, or HIPAA, only protects medical information.

“We have a health privacy machine that’s in crisis,” said Frank Pasquale, a professor at the University of Maryland Carey School of Law who specializes in issues related to machine learning and algorithms. “We have a law that only covers one source of health information. They are rapidly developing another source.”

The information is here.

Wednesday, June 20, 2018

How the Enlightenment Ends

Henry A. Kissinger
The Atlantic
Posted in the June 2018 Issue

Here are two excerpts:

Second, that in achieving intended goals, AI may change human thought processes and human values. AlphaGo defeated the world Go champions by making strategically unprecedented moves—moves that humans had not conceived and have not yet successfully learned to overcome. Are these moves beyond the capacity of the human brain? Or could humans learn them now that they have been demonstrated by a new master?

(cut)

Other AI projects work on modifying human thought by developing devices capable of generating a range of answers to human queries. Beyond factual questions (“What is the temperature outside?”), questions about the nature of reality or the meaning of life raise deeper issues. Do we want children to learn values through discourse with untethered algorithms? Should we protect privacy by restricting AI’s learning about its questioners? If so, how do we accomplish these goals?

If AI learns exponentially faster than humans, we must expect it to accelerate, also exponentially, the trial-and-error process by which human decisions are generally made: to make mistakes faster and of greater magnitude than humans do. It may be impossible to temper those mistakes, as researchers in AI often suggest, by including in a program caveats requiring “ethical” or “reasonable” outcomes. Entire academic disciplines have arisen out of humanity’s inability to agree upon how to define these terms. Should AI therefore become their arbiter?

The article is here.

Sunday, May 13, 2018

Facebook Uses AI To Predict Your Future Actions for Advertisers

Sam Biddle
The Intercept
Originally posted April 13, 2018

Here is an excerpt:

Asked by Fortune’s Stacey Higginbotham where Facebook hoped its machine learning work would take it in five years, Chief Technology Officer Mike Schroepfer said in 2016 his goal was that AI “makes every moment you spend on the content and the people you want to spend it with.” Using this technology for advertising was left unmentioned. A 2017 TechCrunch article declared, “Machine intelligence is the future of monetization for Facebook,” but quoted Facebook executives in only the mushiest ways: “We want to understand whether you’re interested in a certain thing generally or always. Certain things people do cyclically or weekly or at a specific time, and it’s helpful to know how this ebbs and flows,” said Mark Rabkin, Facebook’s vice president of engineering for ads. The company was also vague about the melding of machine learning to ads in a 2017 Wired article about the company’s AI efforts, which alluded to efforts “to show more relevant ads” using machine learning and anticipate what ads consumers are most likely to click on, a well-established use of artificial intelligence. Most recently, during his congressional testimony, Zuckerberg touted artificial intelligence as a tool for curbing hate speech and terrorism.

The article is here.