Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Algorithms.

Wednesday, April 17, 2019

Warnings of a Dark Side to A.I. in Health Care

Cade Metz and Craig S. Smith
The New York Times
Originally published March 21, 2019

Here is an excerpt:

An adversarial attack exploits a fundamental aspect of the way many A.I. systems are designed and built. Increasingly, A.I. is driven by neural networks, complex mathematical systems that learn tasks largely on their own by analyzing vast amounts of data.

By analyzing thousands of eye scans, for instance, a neural network can learn to detect signs of diabetic blindness. This “machine learning” happens on such an enormous scale — human behavior is defined by countless disparate pieces of data — that it can produce unexpected behavior of its own.

In 2016, a team at Carnegie Mellon used patterns printed on eyeglass frames to fool face-recognition systems into thinking the wearers were celebrities. When the researchers wore the frames, the systems mistook them for famous people, including Milla Jovovich and John Malkovich.

A group of Chinese researchers pulled a similar trick by projecting infrared light from the underside of a hat brim onto the face of whoever wore the hat. The light was invisible to the wearer, but it could trick a face-recognition system into thinking the wearer was, say, the musician Moby, who is Caucasian, rather than an Asian scientist.

Researchers have also warned that adversarial attacks could fool self-driving cars into seeing things that are not there. By making small changes to street signs, they have duped cars into detecting a yield sign instead of a stop sign.
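
For readers who want to see the mechanics, here is a minimal sketch of the gradient-based trick these attacks rely on (the fast gradient sign method), assuming a differentiable PyTorch classifier; the model, the eye scan, and the epsilon value are placeholders, not the systems studied in the article.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Nudge every pixel a tiny amount in the direction that raises the loss.

    The change is typically invisible to a human reviewer, yet it can flip
    the classifier's output, which is the core of an adversarial attack.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()  # step along the gradient sign
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage: `retina_model` and `(scan, diagnosis)` stand in for a
# trained eye-scan classifier and one labeled example.
# adv_scan = fgsm_perturb(retina_model, scan, diagnosis)
# print(retina_model(adv_scan).argmax(dim=1))  # may no longer match `diagnosis`
```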

The info is here.

Monday, March 25, 2019

Artificial Intelligence and Black‐Box Medical Decisions: Accuracy versus Explainability

Alex John London
The Hastings Center Report
Volume 49, Issue 1, January/February 2019, Pages 15-21

Abstract

Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or a rationale for particular decisions in individual cases, some commentators regard ceding medical decision‐making to black box systems as contravening the profound moral responsibilities of clinicians. I argue, however, that opaque decisions are more common in medicine than critics realize. Moreover, as Aristotle noted over two millennia ago, when our knowledge of causal systems is incomplete and precarious—as it often is in medicine—the ability to explain how results are produced can be less important than the ability to produce such results and empirically verify their accuracy.

The info is here.

Monday, March 18, 2019

OpenAI's Realistic Text-Generating AI Triggers Ethics Concerns

William Falcon
Forbes.com
Originally posted February 18, 2019

Here is an excerpt:

Why you should care.

GPT-2 is the closest we have come to making conversational AI a possibility. Although conversational AI is far from solved, chatbots powered by this technology could help doctors scale advice over chat, extend support to people at risk of suicide, improve translation systems, and improve speech recognition across applications.

Although OpenAI acknowledges these potential benefits, it also acknowledges the potential risks of releasing the technology. Misuse could include impersonating others online, generating misleading news headlines, or automating the production of fake posts to social media.

But I argue these malicious applications are already possible without this AI. There exist other public models which can already be used for these purposes. Thus, I think not releasing this code is more harmful to the community because A) it sets a bad precedent for open research, B) keeps companies from improving their services, C) unnecessarily hypes these results and D) may trigger unnecessary fears about AI in the general public.
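
As a point of reference for how low the technical barrier already is, here is a minimal text-generation sketch that assumes the Hugging Face transformers library and the "gpt2" checkpoint that was eventually released publicly; neither is part of the original article.

```python
# A minimal sketch, assuming the Hugging Face `transformers` package and the
# publicly released "gpt2" checkpoint (an assumption, not part of the article).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "New guidance for patients managing type 2 diabetes suggests"
for sample in generator(prompt, max_length=60, num_return_sequences=2):
    print(sample["generated_text"])
```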

The info is here.

Saturday, March 16, 2019

How Should AI Be Developed, Validated, and Implemented in Patient Care?

Michael Anderson and Susan Leigh Anderson
AMA J Ethics. 2019;21(2):E125-130.
doi: 10.1001/amajethics.2019.125.

Abstract

Should an artificial intelligence (AI) program that appears to have a better success rate than human pathologists be used to replace or augment humans in detecting cancer cells? We argue that some concerns—the “black-box” problem (ie, the unknowability of how output is derived from input) and automation bias (overreliance on clinical decision support systems)—are not significant from a patient’s perspective but that expertise in AI is required to properly evaluate test results.

Here is an excerpt:

Automation bias. Automation bias refers generally to a kind of complacency that sets in when a job once done by a health care professional is transferred to an AI program. We see nothing ethically or clinically wrong with automation if the program achieves a virtually 100% success rate. If, however, the success rate is lower than that—92%, as in the case presented—it’s important that we have assurances that the program has quality input; in this case, that probably means that the AI program “learned” from a cross section of female patients of diverse ages and races. With diversity of input secured, what matters most, ethically and clinically, is that the AI program has a higher cancer cell-detection success rate than human pathologists.
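
The two assurances the authors ask for, a success rate above the human baseline and training data drawn from a diverse cross section of patients, can be checked on a held-out test set along these lines; the data, column names, and 90% baseline below are illustrative, not figures from the case.

```python
import pandas as pd

# Hypothetical held-out results: one row per case, with the model's call,
# the ground truth, and the patient's demographic group.
results = pd.DataFrame({
    "group":     ["A", "A", "B", "B", "B", "C", "C", "C"],
    "truth":     [1, 0, 1, 1, 0, 1, 0, 1],
    "predicted": [1, 0, 1, 0, 0, 1, 0, 1],
})

HUMAN_BASELINE = 0.90  # illustrative pathologist success rate, not a published figure

correct = results["truth"] == results["predicted"]
print(f"overall success rate: {correct.mean():.2f} vs. human baseline {HUMAN_BASELINE:.2f}")
# Large gaps between groups would undercut the "diverse input" assurance.
print(correct.groupby(results["group"]).mean())
```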

Saturday, March 9, 2019

Can AI Help Reduce Disparities in General Medical and Mental Health Care?

Irene Y. Chen, Peter Szolovits, and Marzyeh Ghassemi
AMA J Ethics. 2019;21(2):E167-179.
doi: 10.1001/amajethics.2019.167.

Abstract

Background: As machine learning becomes increasingly common in health care applications, concerns have been raised about bias in these systems’ data, algorithms, and recommendations. Simply put, as health care improves for some, it might not improve for all.

Methods: Two case studies are examined using a machine learning algorithm on unstructured clinical and psychiatric notes to predict intensive care unit (ICU) mortality and 30-day psychiatric readmission with respect to race, gender, and insurance payer type as a proxy for socioeconomic status.

Results: Clinical note topics and psychiatric note topics were heterogeneous with respect to race, gender, and insurance payer type, which reflects known clinical findings. Differences in prediction accuracy and therefore machine bias are shown with respect to gender and insurance type for ICU mortality and with respect to insurance policy for psychiatric 30-day readmission.

Conclusions: This analysis can provide a framework for assessing and identifying disparate impacts of artificial intelligence in health care.
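
The subgroup comparison the authors describe can be sketched roughly as follows; the column names and scoring model are placeholders rather than the paper's actual pipeline.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def auc_by_subgroup(df, group_col, label_col="outcome", score_col="model_score"):
    """Report discrimination (AUC) separately for each patient subgroup.

    A noticeably lower AUC for one gender or insurance type is the kind of
    accuracy gap the study reports. All column names are placeholders.
    """
    rows = []
    for group, sub in df.groupby(group_col):
        if sub[label_col].nunique() < 2:   # AUC is undefined for a single class
            continue
        rows.append({group_col: group, "n": len(sub),
                     "auc": roc_auc_score(sub[label_col], sub[score_col])})
    return pd.DataFrame(rows)

# Hypothetical usage on a held-out test set `test_df`:
# print(auc_by_subgroup(test_df, "insurance_type"))
# print(auc_by_subgroup(test_df, "gender"))
```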

Sunday, February 24, 2019

Biased algorithms: here’s a more radical approach to creating fairness

Tom Douglas
theconversation.com
Originally posted January 21, 2019

Here is an excerpt:

What’s fair?

AI researchers concerned about fairness have, for the most part, been focused on developing algorithms that are procedurally fair – fair by virtue of the features of the algorithms themselves, not the effects of their deployment. But what if it’s substantive fairness that really matters?

There is usually a tension between procedural fairness and accuracy – attempts to achieve the most commonly advocated forms of procedural fairness increase the algorithm’s overall error rate. Take the COMPAS algorithm for example. If we equalised the false positive rates between black and white people by ignoring the predictors of recidivism that tended to be disproportionately possessed by black people, the likely result would be a loss in overall accuracy, with more people wrongly predicted to re-offend, or not re-offend.

We could avoid these difficulties if we focused on substantive rather than procedural fairness and simply designed algorithms to maximise accuracy, while simultaneously blocking or compensating for any substantively unfair effects that these algorithms might have. For example, instead of trying to ensure that crime prediction errors affect different racial groups equally – a goal that may in any case be unattainable – we could instead ensure that these algorithms are not used in ways that disadvantage those at high risk. We could offer people deemed “high risk” rehabilitative treatments rather than, say, subjecting them to further incarceration.
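
The trade-off Douglas describes can be made concrete with a small check: compute the per-group false positive rates (the usual procedural-fairness metric) alongside overall accuracy, then see how equalising the former moves the latter. A sketch with hypothetical column names:

```python
import pandas as pd

def fpr_and_accuracy(df, group_col="race", label_col="reoffended", pred_col="flagged"):
    """Contrast overall accuracy with per-group false positive rates.

    Equalising the per-group rates usually means changing thresholds, which
    tends to lower the overall accuracy figure; that is the tension in the
    excerpt. All column names are illustrative, not COMPAS's actual fields.
    """
    accuracy = (df[label_col] == df[pred_col]).mean()
    negatives = df[df[label_col] == 0]                    # people who did not re-offend
    fpr = negatives.groupby(group_col)[pred_col].mean()   # share wrongly flagged, per group
    return accuracy, fpr

# Hypothetical usage:
# acc, fpr_by_group = fpr_and_accuracy(risk_predictions)
# print(f"overall accuracy: {acc:.2f}")
# print(fpr_by_group)
```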

The info is here.

Friday, February 22, 2019

Facebook Backs University AI Ethics Institute With $7.5 Million

Sam Shead
Forbes.com
Originally posted January 20, 2019

Facebook is backing an AI ethics institute at the Technical University of Munich with $7.5 million.

The TUM Institute for Ethics in Artificial Intelligence, which was announced on Sunday, will aim to explore fundamental issues affecting the use and impact of AI, Facebook said.

AI is poised to have a profound impact on areas like climate change and healthcare, but it has its risks.

"We will explore the ethical issues of AI and develop ethical guidelines for the responsible use of the technology in society and the economy. Our evidence-based research will address issues that lie at the interface of technology and human values," said TUM Professor Dr. Christoph Lütge, who will lead the institute.

"Core questions arise around trust, privacy, fairness or inclusion, for example, when people leave data traces on the internet or receive certain information by way of algorithms. We will also deal with transparency and accountability, for example in medical treatment scenarios, or with rights and autonomy in human decision-making in situations of human-AI interaction."

The info is here.

Wednesday, January 23, 2019

New tech doorbells can record video, and that's an ethics problem

Molly Wood
www.marketplace.org
Originally posted January 17, 2019

Here is an excerpt:

Ring is pretty clear in its terms and conditions that people are allowing Ring employees to access videos, not live streams, but cached videos. And that's in order to train that artificial intelligence to be better at recognizing neighbors, because they're trying to roll out a feature where they use facial recognition to match with the people that are considered safe. So if I have the Ring cameras, I can say, "All these are safe people. Here's pictures of my kids, my neighbors. If it's not one of these people, consider them unsafe." So that's a new technology. They need to be able to train their algorithms to recognize who's a person, what's a car, what's a cat. Some subset of the videos that are being uploaded just for typical usage are then being shared with their research team in Ukraine.
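
Mechanically, a "safe list" like the one described comes down to comparing a new face against stored reference faces. A toy sketch of that matching step using generic embedding vectors and cosine similarity; the embedding model and threshold are assumptions, not Ring's actual system.

```python
import numpy as np

def is_known_face(face_embedding, safe_embeddings, threshold=0.6):
    """Return True if the face is close enough to anyone on the 'safe' list.

    Embeddings are vectors produced by some face-embedding model (assumed
    here); cosine similarity above `threshold` counts as a match.
    """
    face = face_embedding / np.linalg.norm(face_embedding)
    for reference in safe_embeddings:
        reference = reference / np.linalg.norm(reference)
        if float(np.dot(face, reference)) >= threshold:
            return True
    return False

# Hypothetical usage, where embed() is an assumed face-embedding function:
# safe = [embed(photo) for photo in (kid_photo, neighbor_photo)]
# alert = not is_known_face(embed(doorbell_frame), safe)
```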

The info is here.

Friday, January 11, 2019

The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence

Julia Powles
Medium.com
Originally posted December 7, 2018

Here is an excerpt:

There are three problems with this focus on A.I. bias. The first is that addressing bias as a computational problem obscures its root causes. Bias is a social problem, and seeking to solve it within the logic of automation is always going to be inadequate.

Second, even apparent success in tackling bias can have perverse consequences. Take the example of a facial recognition system that works poorly on women of color because of the group’s underrepresentation both in the training data and among system designers. Alleviating this problem by seeking to “equalize” representation merely co-opts designers in perfecting vast instruments of surveillance and classification.

When underlying systemic issues remain fundamentally untouched, the bias fighters simply render humans more machine readable, exposing minorities in particular to additional harms.

Third — and most dangerous and urgent of all — is the way in which the seductive controversy of A.I. bias, and the false allure of “solving” it, detracts from bigger, more pressing questions. Bias is real, but it’s also a captivating diversion.

The info is here.

Thursday, January 10, 2019

Every Leader’s Guide to the Ethics of AI

Thomas H. Davenport and Vivek Katyal
MIT Sloan Management Review Blog
Originally published

Here is an excerpt:

Leaders should ask themselves whether the AI applications they use treat all groups equally. Unfortunately, some AI applications, including machine learning algorithms, put certain groups at a disadvantage. This issue, called algorithmic bias, has been identified in diverse contexts, including judicial sentencing, credit scoring, education curriculum design, and hiring decisions. Even when the creators of an algorithm have not intended any bias or discrimination, they and their companies have an obligation to try to identify and prevent such problems and to correct them upon discovery.

Ad targeting in digital marketing, for example, uses machine learning to make many rapid decisions about what ad is shown to which consumer. Most companies don’t even know how the algorithms work, and the cost of an inappropriately targeted ad is typically only a few cents. However, some algorithms have been found to target high-paying job ads more to men, and others target ads for bail bondsmen to people with names more commonly held by African Americans. The ethical and reputational costs of biased ad-targeting algorithms, in such cases, can potentially be very high.

Of course, bias isn’t a new problem. Companies using traditional decision-making processes have made these judgment errors, and algorithms created by humans are sometimes biased as well. But AI applications, which can create and apply models much faster than traditional analytics, are more likely to exacerbate the issue. The problem becomes even more complex when black box AI approaches make interpreting or explaining the model’s logic difficult or impossible. While full transparency of models can help, leaders who consider their algorithms a competitive asset will quite likely resist sharing them.
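
The skewed delivery described above (high-paying job ads shown mostly to men, bail-bond ads targeted by name) surfaces as a simple rate imbalance in the impression logs, which is one reason leaders can check for it without opening the model. A rough sketch with a hypothetical log format:

```python
import pandas as pd

# Hypothetical impression log: one row per ad served.
impressions = pd.DataFrame({
    "ad_type": ["exec_job", "exec_job", "exec_job", "exec_job", "bail_bond", "bail_bond"],
    "gender":  ["male", "male", "male", "female", "male", "female"],
})

# Share of each ad type delivered to each gender; a heavy tilt toward one
# group flags the targeting model for closer review.
delivery_share = pd.crosstab(impressions["ad_type"], impressions["gender"], normalize="index")
print(delivery_share)
```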

The info is here.

Tuesday, January 8, 2019

Algorithmic governance: Developing a research agenda through the power of collective intelligence

John Danaher, Michael J. Hogan, Chris Noone, Ronan Kennedy, et al.
Big Data & Society
July–December 2017: 1–21

Abstract

We are living in an algorithmic age where mathematics and computer science are coming together in powerful new ways to influence, shape and guide our behaviour and the governance of our societies. As these algorithmic governance structures proliferate, it is vital that we ensure their effectiveness and legitimacy. That is, we need to ensure that they are an effective means for achieving a legitimate policy goal that are also procedurally fair, open and unbiased. But how can we ensure that algorithmic governance structures are both? This article shares the results of a collective intelligence workshop that addressed exactly this question. The workshop brought together a multidisciplinary group of scholars to consider (a) barriers to legitimate and effective algorithmic governance and (b) the research methods needed to address the nature and impact of specific barriers. An interactive management workshop technique was used to harness the collective intelligence of this multidisciplinary group. This method enabled participants to produce a framework and research agenda for those who are concerned about algorithmic governance. We outline this research agenda below, providing a detailed map of key research themes, questions and methods that our workshop felt ought to be pursued. This builds upon existing work on research agendas for critical algorithm studies in a unique way through the method of collective intelligence.

The paper is here.

Monday, January 7, 2019

The Boundary Between Our Bodies and Our Tech

Kevin Lincoln
Pacific Standard
Originally published November 8, 2018

Here is an excerpt:

"They argued that, essentially, the mind and the self are extended to those devices that help us perform what we ordinarily think of as our cognitive tasks," Lynch says. This can include items as seemingly banal and analog as a piece of paper and a pen, which help us remember, a duty otherwise performed by the brain. According to this philosophy, the shopping list, for example, becomes part of our memory, the mind spilling out beyond the confines of our skull to encompass anything that helps it think.

"Now if that thought is right, it's pretty clear that our minds have become even more radically extended than ever before," Lynch says. "The idea that our self is expanding through our phones is plausible, and that's because our phones, and our digital devices generally—our smartwatches, our iPads—all these things have become a really intimate part of how we go about our daily lives. Intimate in the sense in which they're not only on our body, but we sleep with them, we wake up with them, and the air we breathe is filled, in both a literal and figurative sense, with the trails of ones and zeros that these devices leave behind."

This gets at one of the essential differences between a smartphone and a piece of paper, which is that our relationship with our phones is reciprocal: We not only put information into the device, we also receive information from it, and, in that sense, it shapes our lives far more actively than would, say, a shopping list. The shopping list isn't suggesting to us, based on algorithmic responses to our past and current shopping behavior, what we should buy; the phone is.

The info is here.

Thursday, January 3, 2019

Why We Need to Audit Algorithms

James Guszcza, Iyad Rahwan, Will Bible, Manuel Cebrian, & Vic Katyal
Harvard Business Review
Originally published November 28, 2018

Algorithmic decision-making and artificial intelligence (AI) hold enormous potential and are likely to be economic blockbusters, but we worry that the hype has led many people to overlook the serious problems of introducing algorithms into business and society. Indeed, we see many succumbing to what Microsoft’s Kate Crawford calls “data fundamentalism” — the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. A more nuanced view is needed. It is by now abundantly clear that, left unchecked, AI algorithms embedded in digital and social technologies can encode societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and even impair our mental wellbeing.

Ensuring that societal values are reflected in algorithms and AI technologies will require no less creativity, hard work, and innovation than developing the AI technologies themselves. We have a proposal for a good place to start: auditing. Companies have long been required to issue audited financial statements for the benefit of financial markets and other stakeholders. That’s because — like algorithms — companies’ internal operations appear as “black boxes” to those on the outside. This gives managers an informational advantage over the investing public which could be abused by unethical actors. Requiring managers to report periodically on their operations provides a check on that advantage. To bolster the trustworthiness of these reports, independent auditors are hired to provide reasonable assurance that the reports coming from the “black box” are free of material misstatement. Should we not subject societally impactful “black box” algorithms to comparable scrutiny?
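
One concrete form such an audit can take is a disparate-impact check on the algorithm's outputs alone, for instance the "four-fifths" rule of thumb used in employment settings; the sketch below is illustrative and not the authors' proposal.

```python
import pandas as pd

def disparate_impact_ratio(decisions, group_col="group", outcome_col="approved"):
    """Ratio of the lowest group's approval rate to the highest group's.

    Auditors often flag ratios below 0.8 (the 'four-fifths' rule of thumb).
    The black box itself never has to be opened for this check; only its
    decisions are examined.
    """
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max(), rates

# Hypothetical usage on a log of the algorithm's decisions:
# ratio, rates = disparate_impact_ratio(decision_log)
# print(rates)
# print(f"disparate impact ratio: {ratio:.2f} (review if below 0.80)")
```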

The info is here.

Wednesday, January 2, 2019

The Intuitive Appeal of Explainable Machines

Andrew D. Selbst & Solon Barocas
Fordham Law Review, Volume 87

Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties.

Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible.


In most cases, intuition serves as the unacknowledged bridge from a descriptive account to a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.
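
One common response to inscrutability, though, as the authors note, not to nonintuitiveness, is to approximate the black box with a simple surrogate model and read off its rules. A minimal scikit-learn sketch; the black-box model and feature table are placeholders.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

def surrogate_explanation(black_box, X, max_depth=3):
    """Approximate a black-box classifier with a shallow decision tree.

    The tree offers a sensible description of the rules (inscrutability),
    but it cannot say whether those rules are normatively defensible
    (nonintuitiveness). `X` is assumed to be a pandas DataFrame of features.
    """
    surrogate = DecisionTreeClassifier(max_depth=max_depth)
    surrogate.fit(X, black_box.predict(X))   # mimic the black box's own outputs
    return export_text(surrogate, feature_names=list(X.columns))

# Hypothetical usage:
# print(surrogate_explanation(credit_model, applicant_features))
```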

The info is here.

Sunday, December 30, 2018

AI thinks like a corporation—and that’s worrying

Jonnie Penn
The Economist
Originally posted November 26, 2018

Here is an excerpt:

Perhaps as a result of this misguided impression, public debates continue today about what value, if any, the social sciences could bring to artificial-intelligence research. In Simon’s view, AI itself was born in social science.

David Runciman, a political scientist at the University of Cambridge, has argued that to understand AI, we must first understand how it operates within the capitalist system in which it is embedded. “Corporations are another form of artificial thinking-machine in that they are designed to be capable of taking decisions for themselves,” he explains.

“Many of the fears that people now have about the coming age of intelligent robots are the same ones they have had about corporations for hundreds of years,” says Mr Runciman. The worry is, these are systems we “never really learned how to control.”

After the 2010 BP oil spill, for example, which killed 11 people and devastated the Gulf of Mexico, no one went to jail. The threat that Mr Runciman cautions against is that AI techniques, like playbooks for escaping corporate liability, will be used with impunity.

Today, pioneering researchers such as Julia Angwin, Virginia Eubanks and Cathy O’Neil reveal how various algorithmic systems calcify oppression, erode human dignity and undermine basic democratic mechanisms like accountability when engineered irresponsibly. Harm need not be deliberate; biased data-sets used to train predictive models also wreak havoc. It may be, given the costly labour required to identify and address these harms, that something akin to “ethics as a service” will emerge as a new cottage industry. Ms O’Neil, for example, now runs her own service that audits algorithms.

The info is here.

Thursday, December 20, 2018

The Truth About Algorithms

Cathy O'Neil
a 3 minute video

Algorithms are opinions, not truth machines, and demand the application of ethics

It can be easy to simply accept algorithms as indisputable mathematical truths. After all, who wants to spend their spare time deconstructing complex equations? But make no mistake: algorithms are limited tools for understanding the world, frequently as flawed and biased as the humans who create and interpret them. In this brief animation, which was adapted from a 2017 presentation at the Royal Society of Arts (RSA) in London, the US data scientist Cathy O’Neil, author of Weapons of Math Destruction (2016), argues that algorithms can be useful tools when thoughtfully deployed. However, their newfound ubiquity and massive power call for ethical conduct from modellers, regulation and oversight by policymakers, and a more skeptical, mathematics-literate public.



Monday, December 3, 2018

Our lack of interest in data ethics will come back to haunt us

Jayson Demers
thenextweb.com
Originally posted November 4, 2018

Here is an excerpt:

Most people understand the privacy concerns that can arise with collecting and harnessing big data, but the ethical concerns run far deeper than that.

These are just a smattering of the ethical problems in big data:

  • Ownership: Who really “owns” your personal data, like what your political preferences are, or which types of products you’ve bought in the past? Is it you? Or is it public information? What about people under the age of 18? What about information you’ve tried to privatize?
  • Bias: Biases in algorithms can have potentially destructive effects. Everything from facial recognition to chatbots can be skewed to favor one demographic over another, or one set of values over another, based on the data used to power it.
  • Transparency: Are companies required to disclose how they collect and use data? Or are they free to hide some of their efforts? More importantly, who gets to decide the answer here?
  • Consent: What does it take to “consent” to having your data harvested? Is your passive use of a platform enough? What about agreeing to multi-page, complexly worded Terms and Conditions documents?

If you haven’t heard about these or haven’t thought much about them, I can’t really blame you. We aren’t bringing these questions to the forefront of the public discussion on big data, nor are big data companies going out of their way to discuss them before issuing new solutions.

“Oops, our bad”

One of the biggest problems we keep running into is what I call the “oops, our bad” effect. The idea here is that big companies and data scientists use and abuse data however they want, outside the public eye and without having an ethical discussion about their practices. If and when the public finds out that some egregious activity took place, there’s usually a short-lived public outcry, and the company issues an apology for the actions — without really changing its practices or making up for the damage.

The info is here.

Wednesday, November 21, 2018

Even The Data Ethics Initiatives Don't Want To Talk About Data Ethics

Kalev Leetaru
Forbes.com
Originally posted October 23, 2018

Two weeks ago, a new data ethics initiative, the Responsible Computer Science Challenge, caught my eye. Funded by the Omidyar Network, Mozilla, Schmidt Futures and Craig Newmark Philanthropies, the initiative will award up to $3.5M to “promising approaches to embedding ethics into undergraduate computer science education, empowering graduating engineers to drive a culture shift in the tech industry and build a healthier internet.” I was immediately excited about a well-funded initiative focused on seeding data ethics into computer science curricula, getting students talking about ethics from the earliest stages of their careers. At the same time, I was concerned about whether even such a high-profile effort could possibly reverse the tide of anti-data-ethics that has taken root in academia and what impact it could realistically have in a world in which universities, publishers, funding agencies and employers have largely distanced themselves from once-sacrosanct data ethics principles like informed consent and the right to opt out. Surprisingly, for an initiative focused on evangelizing ethics, the Challenge declined to answer any of the questions I posed it regarding how it saw its efforts as changing this. Is there any hope left for data ethics when the very initiatives designed to help teach ethics don’t want to talk about ethics?

On its surface, the Responsible Computer Science Challenge seems a tailor-built response to a public rapidly awakening to the incredible damage unaccountable platforms have wreaked upon society. The Challenge describes its focus as “supporting the conceptualization, development, and piloting of curricula that integrate ethics with undergraduate computer science training, educating a new wave of engineers who bring holistic thinking to the design of technology products.”

The info is here.

Friday, November 16, 2018

Re-thinking Data Protection Law in the Age of Big Data and AI

Sandra Wachter and Brent Mittelstadt
Oxford Internet Institute
Originally published October 11, 2018

Numerous applications of ‘Big Data analytics’ drawing potentially troubling inferences about individuals and groups have emerged in recent years.  Major internet platforms are behind many of the highest-profile examples: Facebook may be able to infer protected attributes such as sexual orientation, race, as well as political opinions and imminent suicide attempts, while third parties have used Facebook data to decide on the eligibility for loans and infer political stances on abortion. Susceptibility to depression can similarly be inferred via usage data from Facebook and Twitter. Google has attempted to predict flu outbreaks as well as other diseases and their outcomes. Microsoft can likewise predict Parkinson’s disease and Alzheimer’s disease from search engine interactions. Other recent invasive applications include prediction of pregnancy by Target, assessment of users’ satisfaction based on mouse tracking, and China’s far-reaching Social Credit Scoring system.

Inferences in the form of assumptions or predictions about future behaviour are often privacy-invasive, sometimes counterintuitive and, in any case, cannot be verified at the time of decision-making. While we are often unable to predict, understand or refute these inferences, they nonetheless impact on our private lives, identity, reputation, and self-determination.

These facts suggest that the greatest risks of Big Data analytics do not stem solely from how input data (name, age, email address) is used. Rather, it is the inferences that are drawn about us from the collected data, which determine how we, as data subjects, are being viewed and evaluated by third parties, that pose the greatest risk. It follows that protections designed to provide oversight and control over how data is collected and processed are not enough; rather, individuals require meaningful protection against not only the inputs, but the outputs of data processing.
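
Part of what makes these inferences worrying is how mechanically simple they are: a model fitted to innocuous usage data can output guesses about protected attributes. A schematic sketch; the features and target are hypothetical.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def fit_inference_model(usage_features, sensitive_labels):
    """Train a model that guesses a sensitive attribute from usage data alone.

    `usage_features` might be page likes or search terms encoded numerically,
    `sensitive_labels` something like self-reported political opinion; both
    are hypothetical. Nothing in the input looks sensitive; the sensitivity
    lives entirely in the inferred output.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        usage_features, sensitive_labels, test_size=0.25, random_state=0
    )
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"held-out accuracy of the inference: {model.score(X_test, y_test):.2f}")
    return model
```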

The information is here.

Monday, November 5, 2018

We Need To Examine The Ethics And Governance Of Artificial Intelligence

Nikita Malik
forbes.com
Originally posted October 4, 2018

Here is an excerpt:

The second concern is on regulation and ethics. Research teams at MIT and Harvard are already looking into the fast-developing area of AI to map the boundaries within which sensitive but important data can be used. Who determines whether this technology can save lives, for example, versus the very real risk of veering into an Orwellian dystopia?

Take artificial intelligence systems that have the ability to predict a crime based on an individual’s history and their propensity to do harm. Pennsylvania could be one of the first states in the United States to base criminal sentences not just on the crimes people are convicted of, but also on whether they are deemed likely to commit additional crimes in the future. Statistically derived risk assessments, based on factors such as age, criminal record, and employment, will help judges determine which sentences to give. This would help reduce the cost of, and burden on, the prison system.

Risk assessments, which have existed for a long time, have been used in other areas such as the prevention of terrorism and child sexual exploitation. In the latter category, existing human systems are so overburdened that children are often overlooked, at grave risk to themselves. Human errors in the casework of the severely abused child Gabriel Fernandez contributed to his eventual death at the hands of his parents and prompted a serious inquest into the shortcomings of the County Department of Children and Family Services in Los Angeles. Using artificial intelligence in vulnerability assessments of children could aid overworked caseworkers and administrators and flag errors in existing systems.
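
A statistically derived risk assessment of the kind described here is, at bottom, a model fitted to historical outcomes on a few features. A bare-bones sketch using the three factors named in the excerpt; the data, model choice, and score are illustrative, not any jurisdiction's actual tool.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: the three factors named in the article
# plus the observed outcome.
history = pd.DataFrame({
    "age":            [19, 24, 31, 45, 52, 23, 38, 60],
    "prior_offenses": [2, 0, 4, 1, 0, 3, 0, 1],
    "employed":       [0, 1, 0, 1, 1, 0, 1, 1],
    "reoffended":     [1, 0, 1, 0, 0, 1, 0, 0],
})

features = ["age", "prior_offenses", "employed"]
model = LogisticRegression().fit(history[features], history["reoffended"])

# Score a new case. How the score is then used (sentencing, services,
# supervision) is the ethical question, not the arithmetic.
new_case = pd.DataFrame([{"age": 27, "prior_offenses": 2, "employed": 0}])
print(f"estimated risk of re-offending: {model.predict_proba(new_case[features])[0, 1]:.2f}")
```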

The info is here.