Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Monday, May 27, 2019

How To Prevent AI Ethics Councils From Failing

Manoj Saxena
Originally posted April 30, 2019

There's nothing "artificial" about Artificial Intelligence-infused systems. These systems are real and are already affecting everyone today through automated predictions and decisions. However, these digital brains may demonstrate unpredictable behaviors that can be disruptive, confusing, offensive, and even dangerous. Therefore, before embarking on an AI strategy, it's critical for companies to consider AI through the lens of building systems they can trust.

Educate on the criticality of a ‘people and ethics first’ approach

AI systems often function in oblique, invisible ways that can harm the most vulnerable. Examples of such harm include loss of privacy, discrimination, loss of skills, adverse economic impacts, threats to the security of critical infrastructure, and long-term effects on social well-being.

The "technology and monetization first" approach to AI needs to evolve into a "people and ethics first" approach. Ethically aligned design is a set of societal and policy guidelines for the development of intelligent and autonomous systems, intended to ensure such systems serve humanity's values and ethical principles.

Multiple noteworthy organizations and governments have proposed guidelines for developing trusted AI systems. These include the IEEE, the World Economic Forum, the Future of Life Institute, the Alan Turing Institute, AI Global, and the Government of Canada. Once you have your guidelines in place, you can start educating everyone, internally and externally, about the promise and perils of AI and the need for an AI ethics council.

The info is here.