Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Friday, May 31, 2019

The Ethics of Smart Devices That Analyze How We Speak

Trevor Cox
Harvard Business Review
Originally posted May 20, 2019

Here is an excerpt:

But what happens when machines start analyzing how we talk? The big tech firms are coy about exactly what they are planning to detect in our voices and why, but Amazon has a patent that lists a range of traits they might collect, including identity (“gender, age, ethnic origin, etc.”), health (“sore throat, sickness, etc.”), and feelings (“happy, sad, tired, sleepy, excited, etc.”).

This worries me — and it should worry you, too — because algorithms are imperfect. And voice is particularly difficult to analyze because the signals we give off are inconsistent and ambiguous. What’s more, the inferences that even humans make are distorted by stereotypes. Let’s use the example of trying to identify sexual orientation. There is a style of speaking with raised pitch and swooping intonations which some people assume signals a gay man. But confusion often arises because some heterosexuals speak this way, and many homosexuals don’t. Science experiments show that human aural “gaydar” is only right about 60% of the time. Studies of machines attempting to detect sexual orientation from facial images have shown a success rate of about 70%. Sound impressive? Not to me, because that means those machines are wrong 30% of the time. And I would anticipate success rates to be even lower for voices, because how we speak changes depending on who we’re talking to. Our vocal anatomy is very flexible, which allows us to be oral chameleons, subconsciously changing our voices to fit in better with the person we’re speaking with.
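
As a rough, back-of-the-envelope illustration of what those accuracy figures mean at scale, here is a minimal Python sketch; the population size below is an illustrative assumption, not a number from the article.

def expected_errors(population: int, accuracy: float) -> int:
    """Expected number of people misclassified at a given accuracy."""
    return round(population * (1.0 - accuracy))

population = 1_000_000  # hypothetical number of voices analyzed
for accuracy in (0.60, 0.70):
    errors = expected_errors(population, accuracy)
    print(f"accuracy {accuracy:.0%}: ~{errors:,} people misclassified")

Even at the higher figure, a classifier applied to a million voices would be expected to mislabel hundreds of thousands of people, which is the scale of error the author is worried about.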

The info is here.

Can Science Explain Morality?

Charles Glenn
National Review
Originally published May 2, 2019

Here is an excerpt:

Useful as these studies can be, however, they leave us with a diminished sense of the moral weight of human personhood. Right and wrong, good and evil, and so forth are human constructs that derive from human evolutionary history, the cognitive architecture of human language, neurochemistry and neuroanatomy, and contingent human interests. Thus the fundamental source of morality is not outside human experience and biology. There are no real rights, duties, or valuable things out in the world. The nature and quality of moral attitudes — thinking, feeling, or believing that something is either moral or immoral — can be explained psychologically and culturally.

That is, good and evil have no anchor in the basic structure and significance of our existence (indeed, existence itself has no significance) but are entirely contingent. This leaves us in a vacuum of purpose, one that we can easily see reflected in the hedonistic confusion of contemporary culture. “Are there really things we should and shouldn’t do beyond what would best serve our interests and preferences?” we might well ask. “Are some things valuable in an objective sense, beyond what we happen to want or care about?”

The information is here.

Thursday, May 30, 2019

Confronting bias in judging: A framework for addressing psychological biases in decision making

Tom Stafford, Jules Holroyd, & Robin Scaife
PsyArXiv
Last edited on December 24, 2018

Abstract

Cognitive biases are systematic tendencies of thought which undermine accurate or fair reasoning. An allied concept is that of ‘implicit bias’: biases directed at members of particular social identities which may manifest without the individual’s endorsement or awareness. This article reviews the literature on cognitive bias, broadly conceived, and makes proposals for how judges might usefully think about avoiding bias in their decision making. Contra some portrayals of cognitive bias as ‘unconscious’ or unknowable, we contend that things can be known about our psychological biases, and steps taken to address them. We argue for the benefits of a unified treatment of cognitive and implicit biases and propose a “3 by 3” framework which can be used by individuals and institutions to review their practice with respect to addressing bias. We emphasise that addressing bias requires an ongoing commitment to monitoring, evaluation and review rather than one-off interventions.

The research is here.

How Big Tech is struggling with the ethics of AI

Madhumita Murgia & Siddarth Shrikanth
Financial Times
Originally posted April 28, 2019

Here is an excerpt:

The development and application of AI is causing huge divisions both inside and outside tech companies, and Google is not alone in struggling to find an ethical approach.

The companies that are leading research into AI in the US and China, including Google, Amazon, Microsoft, Baidu, SenseTime and Tencent, have taken very different approaches to AI and to whether to develop technology that can ultimately be used for military and surveillance purposes.

For instance, Google has said it will not sell facial recognition services to governments, while Amazon and Microsoft both do so. They have also been attacked for the algorithmic bias of their programmes, where computers inadvertently propagate bias through unfair or corrupt data inputs.
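
To make the mechanism of algorithmic bias concrete, here is a minimal, hypothetical Python sketch of how skew in training data propagates into automated decisions; the group labels, approval counts and toy "model" are illustrative assumptions, not descriptions of any company's actual system.

from collections import defaultdict

# Hypothetical historical decisions: (group, approved)
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 40 + [("B", False)] * 60)

def learn_rates(records):
    """'Train' by memorising the historical approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {group: approvals[group] / totals[group] for group in totals}

rates = learn_rates(history)
for group, rate in rates.items():
    print(f"group {group}: learned approval rate {rate:.0%}")
# The output mirrors the input skew (group A ~80%, group B ~40%),
# whether or not that disparity was ever justified.

A model trained this way simply reproduces whatever unfairness is baked into its inputs, which is the sense in which "unfair or corrupt data inputs" propagate bias.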

In response to criticism not only from campaigners and academics but also their own staff, companies have begun to self-regulate by trying to set up their own “AI ethics” initiatives that perform roles ranging from academic research — as in the case of Google-owned DeepMind’s Ethics and Society division — to formulating guidelines and convening external oversight panels.

The info is here.

Wednesday, May 29, 2019

Why Do We Need Wisdom To Lead In The Future?

Sesil Pir
Forbes.com
Originally posted May 19, 2019

Here is an excerpt:

We live in a society that encourages us to think about how to have a great career but leaves us inarticulate about how to cultivate the inner life. The road to success is so definitively, and so fiercely, paved through competition that it becomes all-consuming for many of us. It is commonly accepted today that information is the key source of all being; yet information alone doesn’t endow one with knowledge, just as knowledge alone doesn’t lead to righteous action. In the age of artificial information, we need to look beyond data to drive purposeful progress and authentic illumination.

Wisdom in the context of leadership refers to the quality of having good, sound judgment. It is a source that sheds light on our own insight and introduces a new appreciation for the world around us. It helps us recognize that others are more than our limiting impressions of them. It fills us with confidence that we are connected and more capable than we could ever dream.

People with this quality tend to lead from a place of strong internal cohesion. They have overcome fragmentation to reach a level of integration, which supports the way they show up – tranquil, settled and rooted. These people tend to withstand the hard winds of volatility and do not easily crumble in the face of adversity. They ground their thoughts, emotions and behaviors in values that feed their self-efficacy, and they understand at heart that perfectionism is an unattainable goal.

The info is here.

The Problem with Facebook

Making Sense Podcast

Originally posted on March 27, 2019

In this episode of the Making Sense podcast, Sam Harris speaks with Roger McNamee about his book Zucked: Waking Up to the Facebook Catastrophe.

Roger McNamee has been a Silicon Valley investor for thirty-five years. He has co-founded successful venture funds, including Elevation with U2’s Bono. He is a former mentor to Facebook CEO Mark Zuckerberg and helped recruit COO Sheryl Sandberg to the company. He holds a B.A. from Yale University and an M.B.A. from the Tuck School of Business at Dartmouth College.

The podcast is here.

The discussion of the fundamental ethical problems with social media companies like Facebook and Google starts about 20 minutes into the podcast.

Tuesday, May 28, 2019

Should Students Take Smart Drugs?

Darian Meacham
www.philosophersmag.com
Originally posted December 8, 2017

If this were a straightforward question, you would not be reading about it in a philosophy magazine. But you are, so it makes sense that we try to clarify the terms of the discussion before wading in too far. Unfortunately (or fortunately, depending on how you look at it), when philosophers set out to de-obfuscate what look to be relatively forthright questions, things usually get more complicated rather than less: each of the operative terms in the question ‘should students take smart drugs?’ opens onto larger debates about the nature of medicine, health, education, learning, and creativity, as well as economic, political and social structures and norms. So, in a sense, a seemingly rather narrow question about a relatively peripheral issue in the education sector morphs into much larger questions: how we think about and value learning; what constitutes psychiatric illness and how we should deal with it; and what sort of productivity educational institutions, universities but also secondary and even primary schools, should value and be oriented towards.

The first question that needs to be addressed is: what is a ‘smart drug’? I have in mind two things when I use the term here:

(1) On the one hand, existing psychostimulants normally prescribed for children and adults with a variety of conditions, most prominently ADHD (Attention Deficit Hyperactivity Disorder), but also various others like narcolepsy, shift-work sleep disorder and schizophrenia. Commonly known by brand and generic names like Adderall, Ritalin, and Modafinil, these drugs are often sold off-label or on the grey market for what could be called non-medical or ‘enhancement’ purposes. The off-label use of psychostimulants for cognitive enhancement purposes is reported to be quite widespread in the USA. So the debate over the use of smart drugs is very much tied up with debates about how the behavioural and cognitive disorders for which these drugs are prescribed are diagnosed and what the causes of such conditions are.

(2) On the other hand, the philosophical-ethical debate around smart drugs need not be restricted to currently existing technologies. Broader issues at stake in the debate allow us to reflect on questions surrounding possible future cognitive enhancement technologies, and even much older ones. In this sense, the question about the use of smart drugs situates itself in a broader discussion about cognitive enhancement and enhancement in general.

The info is here.

Values in the Filter Bubble: Ethics of Personalization Algorithms in Cloud Computing

Engin Bozdag and Job Timmermans
Delft University of Technology
Faculty of Technology, Policy and Management

Abstract

Cloud services such as Facebook and Google Search have started to use personalization algorithms in order to deal with the growing amount of data online. This is often done in order to reduce “information overload”. The user’s interactions with the system are recorded under a single identity, and information is personalized for the user using this identity. However, as we argue, such filters often ignore the context of information and are never value neutral. These algorithms operate without the control and knowledge of the user, leading to a “filter bubble”. In this paper we use the Value Sensitive Design methodology to identify the values and value assumptions implicated in personalization algorithms. Building on existing philosophical work, we discuss three human values implicated in personalized filtering: autonomy, identity, and transparency.
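
As a rough illustration of the mechanism the abstract describes, here is a minimal Python sketch of a personalization filter that folds every interaction into a single user profile and reranks content by overlap with it; the topics, items and scoring rule are illustrative assumptions, not the paper's or any service's actual algorithm.

from collections import Counter

profile = Counter()  # one profile per user identity, regardless of context

def record_click(item_topics):
    """Fold every interaction into the same single-identity profile."""
    profile.update(item_topics)

def rank(candidates):
    """Order items by overlap with the accumulated profile."""
    return sorted(candidates, key=lambda item: -sum(profile[topic] for topic in item[1]))

record_click({"politics", "opinion"})
record_click({"politics", "economy"})

candidates = [("local news", {"local"}),
              ("science feature", {"science"}),
              ("political op-ed", {"politics", "opinion"})]
print([title for title, _ in rank(candidates)])
# Already-clicked topics rise to the top and dissimilar items sink:
# the narrowing effect the authors call the "filter bubble".

Because the scoring ignores where and why the earlier clicks happened, the same profile shapes every future ranking, which is the loss of context and user control the paper examines.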

A copy of the paper is here.

Monday, May 27, 2019

How To Prevent AI Ethics Councils From Failing

Manoj Saxena
www.forbes.com
Originally posted April 30, 2019

There’s nothing “artificial” about Artificial Intelligence-infused systems. These systems are real and are already impacting everyone today through automated predictions and decisions. However, these digital brains may demonstrate unpredictable behaviors that can be disruptive, confusing, offensive, and even dangerous. Therefore, before getting started with an AI strategy, it’s critical for companies to consider AI through the lens of building systems you can trust.

Educate on the criticality of a ‘people and ethics first’ approach

AI systems often function in oblique, invisible ways that may harm the most vulnerable. Examples of such harm include loss of privacy, discrimination, loss of skills, economic impact, threats to the security of critical infrastructure, and long-term effects on social well-being.

The “technology and monetization first” approach to AI needs to evolve into a “people and ethics first” approach. Ethically aligned design is a set of societal and policy guidelines for the development of intelligent and autonomous systems, intended to ensure that such systems serve humanity’s values and ethical principles.

Multiple noteworthy organizations and countries have proposed guidelines for developing trusted AI systems going forward. These include the IEEE, World Economic Forum, the Future of Life Institute, Alan Turing Institute, AI Global, and the Government of Canada. Once you have your guidelines in place, you can then start educating everyone internally and externally about the promise and perils of AI and the need for an AI ethics council.

The info is here.