Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Social Systems.

Saturday, September 19, 2020

Don’t ask if artificial intelligence is good or fair, ask how it shifts power

Pratyusha Kalluri
Nature.com
Originally posted 7 July 2020

Here is an excerpt:

Researchers in AI overwhelmingly focus on providing highly accurate information to decision makers. Remarkably little research focuses on serving data subjects. What’s needed are ways for these people to investigate AI, to contest it, to influence it or to even dismantle it. For example, the advocacy group Our Data Bodies is putting forward ways to protect personal data when interacting with US fair-housing and child-protection services. Such work gets little attention. Meanwhile, mainstream research is creating systems that are extraordinarily expensive to train, further empowering already powerful institutions, from Amazon, Google and Facebook to domestic surveillance and military programmes.

Many researchers have trouble seeing their intellectual work with AI as furthering inequity. Researchers such as me spend our days working on what are, to us, mathematically beautiful and useful systems, and hearing of AI success stories, such as winning Go championships or showing promise in detecting cancer. It is our responsibility to recognize our skewed perspective and listen to those impacted by AI.

Through the lens of power, it’s possible to see why accurate, generalizable and efficient AI systems are not good for everyone. In the hands of exploitative companies or oppressive law enforcement, a more accurate facial recognition system is harmful. Organizations have responded with pledges to design ‘fair’ and ‘transparent’ systems, but fair and transparent according to whom? These systems sometimes mitigate harm, but are controlled by powerful institutions with their own agendas. At best, they are unreliable; at worst, they masquerade as ‘ethics-washing’ technologies that still perpetuate inequity.

Already, some researchers are exposing hidden limitations and failures of systems. They braid their research findings with advocacy for AI regulation. Their work includes critiquing inadequate technological ‘fixes’. Other researchers are explaining to the public how natural resources, data and human labour are extracted to create AI.
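An aside on the "fair and transparent according to whom?" question above: the tension shows up even in toy settings, because common fairness definitions can disagree about the very same predictions. Below is a minimal, hypothetical Python sketch, not taken from Kalluri's article and with entirely synthetic numbers, in which demographic parity holds while equal true-positive rates (a component of equalized odds) fails.

def selection_rate(preds):
    # Fraction of people the model selects (predicts 1 for).
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    # Of the people who truly qualify (label 1), the fraction the model selects.
    hits = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    return hits / sum(labels)

# Synthetic decisions for two demographic groups (labels = ground truth).
group_a = {"labels": [1, 1, 1, 0], "preds": [1, 1, 0, 0]}
group_b = {"labels": [1, 0, 0, 0], "preds": [1, 1, 0, 0]}

for name, group in (("A", group_a), ("B", group_b)):
    print(name,
          "selection rate:", selection_rate(group["preds"]),
          "true positive rate:",
          round(true_positive_rate(group["preds"], group["labels"]), 2))

# Prints: A selection rate: 0.5 true positive rate: 0.67
#         B selection rate: 0.5 true positive rate: 1.0
# Both groups are selected at the same rate, so the system is "fair" by
# demographic parity; yet qualified members of group A are found only
# two-thirds as often as those of group B. Choosing which definition to
# enforce is exactly the value judgment the excerpt says powerful
# institutions currently make on everyone's behalf.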

The info is here.

Saturday, March 7, 2020

Ethical guidelines for social justice in psychology

Hailes, H., et al.
Professional Psychology: Research and Practice

Abstract

As the field of psychology increasingly recognizes the importance of engaging in work that advances social justice and as social justice-focused training and practice in the field grows, psychologists need ethical guidelines for this work. The American Psychological Association’s ethical principles include “justice” as a core principle but do not expand extensively upon its implications. This article provides a proposed set of ethical guidelines for social justice work in psychology. Within the framework of 3 domains of justice—interactional (about relational dynamics), distributive (about provision for all), and procedural (about just processes) justice—this article outlines 7 guidelines for social justice ethics: (1) reflecting critically on relational power dynamics; (2) mitigating relational power dynamics; (3) focusing on empowerment and strengths-based approaches; (4) focusing energy and resources on the priorities of marginalized communities; (5) contributing time, funding, and effort to preventive work; (6) engaging with social systems; and (7) raising awareness about system impacts on individual and community well-being. Vignettes of relevant ethical dilemmas are presented and implications for practice are discussed.

This article explores the need for a set of ethical standards to guide psychologists’ social justice-oriented work. It conceptualizes social justice as having three components, focused on relational dynamics, provision for all, and just processes. Additionally, it outlines and provides examples of seven proposed standards for social justice ethics in psychology.

The article is here.

Saturday, July 21, 2018

Bias detectives: the researchers striving to make algorithms fair

Rachel Courtland
Nature.com
Originally posted

Here is an excerpt:

“What concerns me most is the idea that we’re coming up with systems that are supposed to ameliorate problems [but] that might end up exacerbating them,” says Kate Crawford, co-founder of the AI Now Institute, a research centre at New York University that studies the social implications of artificial intelligence.

With Crawford and others waving red flags, governments are trying to make software more accountable. Last December, the New York City Council passed a bill to set up a task force that will recommend how to publicly share information about algorithms and investigate them for bias. This year, France’s president, Emmanuel Macron, has said that the country will make all algorithms used by its government open. And in guidance issued this month, the UK government called for those working with data in the public sector to be transparent and accountable. Europe’s General Data Protection Regulation (GDPR), which came into force at the end of May, is also expected to promote algorithmic accountability.

In the midst of such activity, scientists are confronting complex questions about what it means to make an algorithm fair. Researchers such as Vaithianathan, who work with public agencies to try to build responsible and effective software, must grapple with how automated tools might introduce bias or entrench existing inequity — especially if they are being inserted into an already discriminatory social system.
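To make the "entrench existing inequity" worry concrete, here is a minimal, hypothetical Python sketch, not taken from Courtland's article and with entirely synthetic numbers. If historical records measure how heavily a group was scrutinized rather than how its members actually behaved, a tool that faithfully predicts those records will recommend still more scrutiny of the same group.

from collections import Counter

# Synthetic history: the underlying behavior is identical in both
# neighborhoods, but neighborhood B was scrutinized three times as heavily,
# so three times as many of its incidents were ever recorded.
history = (
    [("A", 1)] * 20 + [("A", 0)] * 80 +  # A: 20 recorded incidents per 100 residents
    [("B", 1)] * 60 + [("B", 0)] * 40    # B: 60 per 100, inflated by over-scrutiny
)

incidents, residents = Counter(), Counter()
for hood, recorded in history:
    residents[hood] += 1
    incidents[hood] += recorded

# A naive risk tool that scores each neighborhood by its historical rate.
for hood in ("A", "B"):
    print(hood, "risk score:", incidents[hood] / residents[hood])

# Prints: A risk score: 0.2, B risk score: 0.6. Acting on these scores sends
# even more scrutiny to B, which generates even more recorded incidents
# there: a feedback loop in which an accurate model of biased data deepens
# the original disparity.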

The information is here.