Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, May 30, 2019

Confronting bias in judging: A framework for addressing psychological biases in decision making

Tom Stafford, Jules Holroyd, & Robin Scaife
PsyArXiv
Last edited on December 24, 2018

Abstract

Cognitive biases are systematic tendencies of thought which undermine accurate or fair reasoning. An allied concept is that of ‘implicit bias’: biases directed at members of particular social identities which may manifest without the individual’s endorsement or awareness. This article reviews the literature on cognitive bias, broadly conceived, and makes proposals for how judges might usefully think about avoiding bias in their decision making. Contra some portrayals of cognitive bias as ‘unconscious’ or unknowable, we contend that things can be known about our psychological biases, and steps taken to address them. We argue for the benefits of a unified treatment of cognitive and implicit biases and propose a “3 by 3” framework which can be used by individuals and institutions to review their practice with respect to addressing bias. We emphasise that addressing bias requires an ongoing commitment to monitoring, evaluation and review rather than one-off interventions.

The research is here.

How Big Tech is struggling with the ethics of AI

Madhumita Murgia & Siddarth Shrikanth
Financial Times
Originally posted April 28, 2019

Here is an excerpt:

The development and application of AI is causing huge divisions both inside and outside tech companies, and Google is not alone in struggling to find an ethical approach.

The companies that are leading research into AI in the US and China, including Google, Amazon, Microsoft, Baidu, SenseTime and Tencent, have taken very different approaches to AI and to whether to develop technology that can ultimately be used for military and surveillance purposes.

For instance, Google has said it will not sell facial recognition services to governments, while Amazon and Microsoft both do so. They have also been attacked for the algorithmic bias of their programmes, where computers inadvertently propagate bias through unfair or corrupt data inputs.

In response to criticism not only from campaigners and academics but also their own staff, companies have begun to self-regulate by trying to set up their own “AI ethics” initiatives that perform roles ranging from academic research — as in the case of Google-owned DeepMind’s Ethics and Society division — to formulating guidelines and convening external oversight panels.
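
As an aside: the “algorithmic bias” mentioned in the excerpt is easy to demonstrate. The following minimal Python sketch is our own illustration, not from the article; the data, groups, and approval rule are all invented. It shows how any model that faithfully learns from skewed historical decisions will reproduce the skew:

```python
import random

random.seed(0)

def historical_decision(qualified, group):
    """Simulate a biased past process: qualified applicants from
    group "B" were approved only ~60% of the time; group "A" always."""
    if not qualified:
        return 0
    return 1 if group == "A" or random.random() < 0.6 else 0

# Historical records that silently encode the bias.
applicants = [(random.random() < 0.5, random.choice("AB"))
              for _ in range(10_000)]
labels = [historical_decision(q, g) for q, g in applicants]

# The simplest possible "model": the per-group approval rate for
# qualified applicants, i.e. the statistic any accurate learner
# trained on these labels would recover.
def learned_rate(group):
    outcomes = [label for (q, g), label in zip(applicants, labels)
                if q and g == group]
    return sum(outcomes) / len(outcomes)

print(f"qualified group A approved: {learned_rate('A'):.2f}")  # ~1.00
print(f"qualified group B approved: {learned_rate('B'):.2f}")  # ~0.60
# The learned statistic reproduces the gap: the bias lives in the
# data-generating process, not in the learning step.
```

Nothing in the learning step mentions either group unfavourably; the disparity is inherited entirely from the training data, which is the point the article's sources are making.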

The info is here.

Wednesday, May 29, 2019

Why Do We Need Wisdom To Lead In The Future?

Sesil Pir
Forbes.com
Originally posted May 19, 2019

Here is an excerpt:

We live in a society that encourages us to think about how to have a great career but leaves us inarticulate about how to cultivate the inner life. The road to success is so fiercely paved with competition that it becomes all-consuming for many of us. It is commonly accepted today that information is the key source of all being; yet information alone doesn’t endow one with knowledge, just as knowledge alone doesn’t lead to righteous action. In the age of artificial intelligence, we need to look beyond data to drive purposeful progress and authentic illumination.

Wisdom in the context of leadership refers to the quality of having good, sound judgment. It sheds light on our own insight and introduces a new appreciation for the world around us. It helps us recognize that others are more than our limiting impressions of them. It fills us with confidence that we are connected and more capable than we could ever dream of.

People with this quality tend to lead from a place of strong internal cohesion. They have overcome fragmentation to reach a level of integration, which supports the way they show up: tranquil, settled and rooted. These people tend to withstand the hard winds of volatility and do not easily crumble in the face of adversity. They ground their thoughts, emotions and behaviors in values that feed their self-efficacy, and they wholeheartedly understand that perfectionism is an unattainable goal.

The info is here.

The Problem with Facebook

Making Sense Podcast

Originally posted on March 27, 2019

In this episode of the Making Sense podcast, Sam Harris speaks with Roger McNamee about his book Zucked: Waking Up to the Facebook Catastrophe.

Roger McNamee has been a Silicon Valley investor for thirty-five years. He has co-founded successful venture funds, including Elevation with U2’s Bono. He was an early mentor to Facebook CEO Mark Zuckerberg and helped recruit COO Sheryl Sandberg to the company. He holds a B.A. from Yale University and an M.B.A. from the Tuck School of Business at Dartmouth College.

The podcast is here.

The discussion of the fundamental ethical problems with social media companies like Facebook and Google starts about 20 minutes into the podcast.

Tuesday, May 28, 2019

Should Students Take Smart Drugs?

Darian Meacham
www.philosophersmag.com
Originally posted December 8, 2017

If this were a straightforward question, you would not be reading about it in a philosophy magazine. But you are, so it makes sense to clarify the terms of the discussion before wading in too far. Unfortunately (or fortunately, depending on how you look at it), when philosophers set out to de-obfuscate what look to be relatively forthright questions, things usually get more complicated rather than less: each of the operative terms in the question ‘Should students take smart drugs?’ opens onto larger debates about the nature of medicine, health, education, learning, and creativity, as well as economic, political and social structures and norms. So, in a sense, a seemingly narrow question about a relatively peripheral issue in the education sector morphs into much larger questions: how should we think about and value learning; what constitutes psychiatric illness and how should we deal with it; and what sort of productivity should educational institutions (universities, but also secondary and even primary schools) value and be oriented towards?

The first question that needs to be addressed is what is a ‘smart drug’? I have in mind two things when I use the term here:

(1) On the one hand, existing psychostimulants normally prescribed for children and adults with a variety of conditions, most prominently ADHD (Attention Deficit Hyperactivity Disorder), but also others like narcolepsy, shift-work sleep disorder and schizophrenia. Commonly known by brand and generic names like Adderall, Ritalin, and Modafinil, these drugs are often sold off-label or on the grey market for what could be called non-medical or ‘enhancement’ purposes. The off-label use of psychostimulants for cognitive enhancement is reported to be quite widespread in the USA. So the debate over the use of smart drugs is very much tied up with debates about how the behavioural and cognitive disorders for which these drugs are prescribed are diagnosed, and what the causes of such conditions are.

(2) On the other hand, the philosophical-ethical debate around smart drugs need not be restricted to currently existing technologies. Broader issues at stake in the debate allow us to reflect on questions surrounding possible future cognitive enhancement technologies, and even much older ones. In this sense, the question about the use of smart drugs situates itself in a broader discussion about cognitive enhancement and enhancement in general.

The info is here.

Values in the Filter Bubble: Ethics of Personalization Algorithms in Cloud Computing

Engin Bozdag and Job Timmermans
Delft University of Technology
Faculty of Technology, Policy and Management

Abstract

Cloud services such as Facebook and Google search have started to use personalization algorithms in order to deal with the growing amount of data online, often with the aim of reducing “information overload”. Users’ interactions with the system are recorded under a single identity, and information is personalized for the user using this identity. However, as we argue, such filters often ignore the context of information, and they are never value-neutral. These algorithms operate without the control and knowledge of the user, leading to a “filter bubble”. In this paper we use the Value Sensitive Design methodology to identify the values and value assumptions implicated in personalization algorithms. Building on existing philosophical work, we discuss three human values implicated in personalized filtering: autonomy, identity, and transparency.
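
The feedback loop the authors describe can be made concrete. Below is a minimal Python sketch, our own illustration under invented assumptions (a four-topic catalogue and a user who simply clicks whatever is shown), not the paper's model, of how engagement-based ranking narrows exposure:

```python
from collections import Counter

# Toy topics; in a real system these would be items, posts, or pages.
TOPICS = ["politics-left", "politics-right", "sports", "science"]

def recommend(profile: Counter, page_size: int = 10) -> list[str]:
    """Rank topics by past engagement; fill the page mostly from the
    top topic, leaving one slot for the runner-up."""
    total = sum(profile.values()) or 1
    score = {t: profile[t] / total for t in TOPICS}
    ranked = sorted(TOPICS, key=lambda t: score[t], reverse=True)
    return [ranked[0]] * (page_size - 1) + [ranked[1]]

profile = Counter({"politics-left": 3, "science": 2})  # initial clicks
for _ in range(5):
    shown = recommend(profile)
    profile.update(shown)  # simplification: the user clicks what is shown

print(profile)
# One topic now dominates the profile, even though the user started
# out engaging with two; the other topics are never surfaced at all.
```

Even this crude loop collapses a mixed profile onto a single topic within a few iterations, which is exactly the narrowing of exposure that the paper's analysis of autonomy, identity, and transparency is concerned with.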

A copy of the paper is here.

Monday, May 27, 2019

How To Prevent AI Ethics Councils From Failing

Manoj Saxena
www.forbes.com
Originally posted April 30, 2019

There's nothing “artificial” about Artificial Intelligence-infused systems. These systems are real and are already impacting everyone today through automated predictions and decisions. However, these digital brains may demonstrate unpredictable behaviors that can be disruptive, confusing, offensive, and even dangerous. Therefore, before getting started with an AI strategy, it’s critical for companies to consider AI through the lens of building systems they can trust.

Educate on the criticality of a ‘people and ethics first’ approach

AI systems often function in oblique, invisible ways that may harm the most vulnerable. Examples of such harm include loss of privacy, discrimination, loss of skills, economic impact, the security of critical infrastructure, and long-term effects on social well-being.

The “technology and monetization first” approach to AI needs to evolve to a “people and ethics first” approach. Ethically aligned design is a set of societal and policy guidelines for the development of intelligent and autonomous systems, to ensure such systems serve humanity’s values and ethical principles.

Multiple noteworthy organizations and countries have proposed guidelines for developing trusted AI systems going forward. These include the IEEE, World Economic Forum, the Future of Life Institute, Alan Turing Institute, AI Global, and the Government of Canada. Once you have your guidelines in place, you can then start educating everyone internally and externally about the promise and perils of AI and the need for an AI ethics council.

The info is here.

Sunday, May 26, 2019

Brain science should be making prisons better, not trying to prove innocence

Arielle Baskin-Sommers
theconversation.com
Originally posted November 1, 2017

Here is an excerpt:

Unfortunately, when neuroscientific assessments are presented to the court, they can sway juries, regardless of their relevance. Using these techniques to produce expert evidence doesn’t bring the court any closer to truth or justice. And with a single brain scan costing thousands of dollars, plus expert interpretation and testimony, it’s an expensive tool out of reach for many defendants. Rather than helping untangle legal responsibility, neuroscience here causes an even deeper divide between the rich and the poor, based on pseudoscience.

While I remain skeptical about the use of neuroscience in the judicial process, there are a number of places where its findings could help corrections systems develop policies and practices based on evidence.

Solitary confinement harms more than helps

Take, for instance, the use within prisons of solitary confinement as a punishment for disciplinary infractions. In 2015, the Bureau of Justice Statistics reported that nearly 20 percent of federal and state prisoners and 18 percent of local jail inmates had spent time in solitary.

Research consistently demonstrates that time spent in solitary increases the chances of persistent emotional trauma and distress. Solitary can lead to hallucinations, fantasies and paranoia; it can increase anxiety, depression and apathy as well as difficulties in thinking, concentrating, remembering, paying attention and controlling impulses. People placed in solitary are more likely to engage in self-mutilation as well as exhibit chronic rage, anger and irritability. The term “isolation syndrome” has even been coined to capture the severe and long-lasting effects of solitary.

The info is here.

Saturday, May 25, 2019

Lost-in-the-mall: False memory or false defense?

Ruth A. Blizard & Morgan Shaw (2019)
Journal of Child Custody
DOI: 10.1080/15379418.2019.1590285

Abstract

False Memory Syndrome (FMS) and Parental Alienation Syndrome (PAS) were developed as defenses for parents accused of child abuse, as part of a larger movement to undermine the prosecution of child abuse. The lost-in-the-mall study by Dr. Elizabeth Loftus concludes that an entire false memory can be implanted by suggestion. It has since been used to discredit abuse survivors’ testimony by implying that false memories of childhood abuse can be implanted by psychotherapists. Examination of the research methods and findings of the study shows that no full false memories were actually formed. Similarly, PAS, coined by Richard Gardner, is frequently used in custody cases to discredit children’s testimony by alleging that the protective parent coached them to have false memories of abuse. There is no scientific research demonstrating the existence of PAS, and, in fact, studies on the suggestibility of children show that they cannot easily be persuaded to provide detailed disclosures of abuse.

The info is here.