Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Explainability.

Wednesday, September 11, 2019

How The Software Industry Must Marry Ethics With Artificial Intelligence

Christian Pedersen
Forbes.com
Originally posted July 15, 2019

Here is an excerpt:

Companies developing software used to automate business decisions and processes, military operations or other serious work need to address explainability and human control over AI as they weave it into their products. Some have started to do this.

As AI is introduced into existing software environments, those application environments can help. Many will have established preventive and detective controls and role-based security. They can track who made what changes to processes or to the data that feeds through those processes. Some of these same pathways can be used to document changes made to goals, priorities or data given to AI.
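
The article does not spell out what such change tracking would look like in practice. As a rough, hedged sketch (all class, field, and role names here are hypothetical, not from the piece), an application's existing role-based security and change logging could be extended to record who altered the goals, priorities, or data given to an AI component:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class AIChangeRecord:
    """One audit-trail entry for a change made to an AI component."""
    user: str        # who made the change (from role-based security)
    role: str        # the role that authorized it
    target: str      # e.g. "objective", "priority", "training_data"
    old_value: Any
    new_value: Any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AIAuditLog:
    """Append-only log, reusing the application's existing change tracking."""
    def __init__(self) -> None:
        self._records: list[AIChangeRecord] = []

    def record(self, entry: AIChangeRecord) -> None:
        self._records.append(entry)

    def history(self, target: str) -> list[AIChangeRecord]:
        return [r for r in self._records if r.target == target]

# Documenting a (hypothetical) change to the goal given to the AI:
log = AIAuditLog()
log.record(AIChangeRecord(
    user="j.doe", role="ops_manager", target="objective",
    old_value="minimize_inventory_cost", new_value="maximize_service_level",
))
print(log.history("objective"))
```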

But software vendors have a greater opportunity. They can develop products that prevent bad use of AI, but they can also use AI to actively protect and aid people, business and society. AI can be configured to solve for anything from overall equipment effectiveness or inventory reorder point to yield on capital. Why not have it solve for nonfinancial, corporate social responsibility metrics like your environmental footprint or your environmental or economic impact? Even a common management practice like using a balanced scorecard could help AI strive toward broader business goals that consider the well-being of customers, employees, suppliers and other stakeholders.
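
To make the balanced-scorecard idea concrete, here is a minimal, illustrative sketch (the metric names, weights, and candidate plans are invented for the example, not drawn from the article) of scoring alternatives against a mix of financial and nonfinancial objectives:

```python
def scorecard_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum over normalized metrics (higher is better)."""
    return sum(weights[name] * metrics.get(name, 0.0) for name in weights)

# Hypothetical balanced scorecard: financial and nonfinancial goals together.
weights = {
    "yield_on_capital": 0.4,         # financial
    "equipment_effectiveness": 0.2,  # operational
    "environmental_footprint": 0.2,  # nonfinancial (scaled so higher = smaller footprint)
    "employee_wellbeing": 0.2,       # nonfinancial
}

candidate_plans = {
    "plan_a": {"yield_on_capital": 0.9, "equipment_effectiveness": 0.7,
               "environmental_footprint": 0.3, "employee_wellbeing": 0.5},
    "plan_b": {"yield_on_capital": 0.7, "equipment_effectiveness": 0.6,
               "environmental_footprint": 0.8, "employee_wellbeing": 0.8},
}

best = max(candidate_plans, key=lambda p: scorecard_score(candidate_plans[p], weights))
print(best)  # "plan_b" -- it wins once the nonfinancial metrics carry real weight
```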

The info is here.

Thursday, August 1, 2019

Ethics in the Age of Artificial Intelligence

Shohini Kundu
Scientific American
Originally published July 3, 2019

Here is an excerpt:

Unfortunately, in that decision-making process, AI also took away the transparency, explainability, predictability, teachability and auditability of the human move, replacing it with opacity. The logic for the move is not only unknown to the players, but also unknown to the creators of the program. As AI makes decisions for us, transparency and predictability of decision-making may become a thing of the past.

Imagine a situation in which your child comes home to you and asks for an allowance to go see a movie with her friends. You oblige. A week later, your other child comes to you with the same request, but this time, you decline. This will immediately raise the issue of unfairness and favoritism. To avoid any accusation of favoritism, you explain to your child that she must finish her homework before qualifying for any pocket money.

Without any explanation, there is bound to be tension in the family. Now imagine replacing your role with an AI system that has gathered data from thousands of families in similar situations. By studying the consequence of allowance decisions on other families, it comes to the conclusion that one sibling should get the pocket money while the other sibling should not.

But the AI system cannot really explain the reasoning—other than to say that it weighed your child’s hair color, height, weight and all other attributes that it has access to in arriving at a decision that seems to work best for other families. How is that going to work?
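
The excerpt is a thought experiment, but the underlying worry is something practitioners can at least probe. As a hedged sketch (synthetic data, illustrative only, not from the article), a feature-attribution check such as permutation importance can reveal whether an opaque model is leaning on attributes no human decision-maker could offer as reasons:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
# Features: one a parent would consider relevant, plus the incidental
# attributes from the example (hair color, height, weight).
homework_done = rng.integers(0, 2, n)
hair_color = rng.integers(0, 3, n)
height = rng.normal(130, 15, n)
weight = rng.normal(30, 5, n)
X = np.column_stack([homework_done, hair_color, height, weight])
y = homework_done  # allowance granted iff homework is done

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(
        ["homework_done", "hair_color", "height", "weight"],
        result.importances_mean):
    print(f"{name:14s} {importance:.3f}")
# If the incidental attributes score highly, the model is weighting inputs
# that a human could not defend as reasons for the decision.
```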

The info is here.

Monday, May 27, 2019

How To Prevent AI Ethics Councils From Failing

Manoj Saxena
www.forbes.com
Originally posted April 30, 2019

There's nothing “artificial” about Artificial Intelligence-infused systems. These systems are real and are already impacting everyone today through automated predictions and decisions. However, these digital brains may demonstrate unpredictable behaviors that can be disruptive, confusing, offensive, and even dangerous. Therefore, before getting started with an AI strategy, it’s critical for companies to consider AI through the lens of building systems you can trust.

Educate on the criticality of a ‘people and ethics first’ approach

AI systems often function in oblique, invisible ways that may harm the most vulnerable. Examples of such harm include loss of privacy, discrimination, loss of skills, economic impact, the security of critical infrastructure, and long-term effects on social well-being.

The “technology and monetization first” approach to AI needs to evolve to a “people and ethics first” approach. Ethically aligned design is a set of societal and policy guidelines for the development of intelligent and autonomous systems to ensure such systems serve humanity’s values and ethical principles.

Multiple noteworthy organizations and countries have proposed guidelines for developing trusted AI systems going forward. These include the IEEE, World Economic Forum, the Future of Life Institute, Alan Turing Institute, AI Global, and the Government of Canada. Once you have your guidelines in place, you can then start educating everyone internally and externally about the promise and perils of AI and the need for an AI ethics council.

The info is here.

Monday, March 25, 2019

Artificial Intelligence and Black‐Box Medical Decisions: Accuracy versus Explainability

Alex John London
The Hastings Center Report
Volume 49, Issue 1, January/February 2019, Pages 15-21

Abstract

Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or a rationale for particular decisions in individual cases, some commentators regard ceding medical decision‐making to black box systems as contravening the profound moral responsibilities of clinicians. I argue, however, that opaque decisions are more common in medicine than critics realize. Moreover, as Aristotle noted over two millennia ago, when our knowledge of causal systems is incomplete and precarious—as it often is in medicine—the ability to explain how results are produced can be less important than the ability to produce such results and empirically verify their accuracy.
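
The abstract closes on the point that producing results and empirically verifying their accuracy can matter more than explaining how they were produced. A minimal, hedged sketch of what that verification might look like (the dataset and models are placeholders, not from the paper): compare an opaque model against an interpretable baseline on cross-validated accuracy.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

interpretable = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
black_box = GradientBoostingClassifier(random_state=0)

for name, model in [("logistic regression", interpretable),
                    ("gradient boosting", black_box)]:
    scores = cross_val_score(model, X, y, cv=5)  # accuracy on held-out folds
    print(f"{name:20s} mean accuracy = {scores.mean():.3f}")

# On this view, the opaque model earns its place only if its validated
# accuracy holds up against what a more transparent approach can achieve.
```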

The info is here.

Friday, March 22, 2019

Pop Culture, AI And Ethics

Phaedra Boinodiris
Forbes.com
Originally published February 24, 2019

Here is an excerpt:


5 Areas of Ethical Focus

The guide goes on to outline five areas of ethical focus or consideration:

Accountability – There is a group responsible for ensuring that REAL guests in the hotel are interviewed to determine their needs. When feedback is negative, this group implements a feedback loop to better understand preferences. They ensure that at any point in time, a guest can turn the AI off.

Fairness – If there is bias in the system, the accountable team must take the time to train with a larger, more diverse set of data. Ensure that the data collected about a user's race, gender, etc., in combination with their usage of the AI, will not be used to market to or exclude certain demographics.

Explainability and Enforced Transparency – If a guest doesn’t like the AI’s answer, she can ask how the recommendation was made and which dataset was used. A user must explicitly opt in to use the assistant, and the guest must be given options to consent to what information is gathered.

User Data Rights – The hotel does not own a guest’s data, and a guest has the right to have the system purged at any time (a minimal sketch of this appears after this list). Upon request, a guest can receive a summary of what information was gathered by the AI assistant.

Value Alignment – Align the experience to the values of the hotel. The hotel values privacy and ensuring that guests feel respected and valued. Make it clear that the AI assistant is not designed to keep data or monitor guests. Relay how often guest data is auto deleted. Ensure that the AI can speak in the guest’s respective language.
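
To make the "User Data Rights" item concrete, here is a minimal sketch. Everything in it is hypothetical (the class and method names are invented for illustration, not taken from the guide): the guest can request a summary of what the assistant has gathered and can have it purged at any time.

```python
from collections import defaultdict

class GuestDataStore:
    """Hypothetical store for data an AI assistant gathers about hotel guests."""

    def __init__(self) -> None:
        self._data: dict[str, list[dict]] = defaultdict(list)

    def record(self, guest_id: str, item: dict) -> None:
        # Caller is responsible for checking that the guest has opted in.
        self._data[guest_id].append(item)

    def summary(self, guest_id: str) -> list[str]:
        """What was gathered, reported back to the guest on request."""
        return [f"{item['kind']} collected at {item['time']}"
                for item in self._data[guest_id]]

    def purge(self, guest_id: str) -> int:
        """Delete everything held about a guest; returns how many items were removed."""
        return len(self._data.pop(guest_id, []))

store = GuestDataStore()
store.record("room_412", {"kind": "dining preference", "time": "2019-02-20T18:05"})
print(store.summary("room_412"))
print(store.purge("room_412"), "items purged")
```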

The info is here.

Monday, May 14, 2018

Computer Says No: Part 2 - Explainability

Jasmine Leonard
theRSA.org
Originally posted March 23, 2018

Here is an excerpt:

The trouble is, since many decisions should be explainable, it’s tempting to assume that automated decision systems should also be explainable.  But as discussed earlier, automated decision systems don’t actually make decisions; they make predictions.  And when a prediction is used to guide a decision, the prediction is itself part of the explanation for that decision.  It therefore doesn’t need to be explained itself; it merely needs to be justifiable.

This is a subtle but important distinction.  To illustrate it, imagine you were to ask your doctor to explain her decision to prescribe you a particular drug.  She could do so by saying that the drug had cured many other people with similar conditions in the past and that she therefore predicted it would cure you too.  In this case, her prediction that the drug will cure you is the explanation for her decision to prescribe it.  And it’s a good explanation because her prediction is justified – not on the basis of an explanation of how the drug works, but on the basis that it’s proven to be effective in previous cases.  Indeed, explanations of how drugs work are often not available because the biological mechanisms by which they operate are poorly understood, even by those who produce them.  Moreover, even if your doctor could explain how the drug works, unless you have considerable knowledge of pharmacology, it’s unlikely that the explanation would actually increase your understanding of her decision to prescribe the drug.

If explanations of predictions are unnecessary to justify their use in decision-making, then what else can justify the use of a prediction made by an automated decision system?  The best answer, I believe, is that the system is shown to be sufficiently accurate.  What “sufficiently accurate” means is obviously up for debate, but at a minimum I would suggest that it means the system’s predictions are at least as accurate as those produced by a trained human.  It also means that there are no other readily available systems that produce more accurate predictions.
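
The "sufficiently accurate" criterion lends itself to a simple check. A hedged sketch (the prediction and outcome figures below are placeholders, not real data): compare the system's hit rate with a trained human's on the same held-out cases.

```python
def accuracy(predictions: list[int], outcomes: list[int]) -> float:
    """Fraction of cases where the prediction matched what actually happened."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

# Placeholder data: 1 = event occurred / predicted, 0 = did not.
outcomes           = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
human_predictions  = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
system_predictions = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]

human_acc = accuracy(human_predictions, outcomes)
system_acc = accuracy(system_predictions, outcomes)
print(f"human: {human_acc:.2f}, system: {system_acc:.2f}")

# On the article's criterion, using the system is justified only if it is at
# least as accurate as the trained human (and no readily available system does better).
print("use justified:", system_acc >= human_acc)
```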

The article is here.