Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care


Wednesday, July 25, 2018

Descartes was wrong: ‘a person is a person through other persons’

Abeba Birhane
aeon.com
Originally published April 7, 2017

Here is an excerpt:

So reality is not simply out there, waiting to be uncovered. ‘Truth is not born nor is it to be found inside the head of an individual person, it is born between people collectively searching for truth, in the process of their dialogic interaction,’ Bakhtin wrote in Problems of Dostoevsky’s Poetics (1929). Nothing simply is itself, outside the matrix of relationships in which it appears. Instead, being is an act or event that must happen in the space between the self and the world.

Accepting that others are vital to our self-perception is a corrective to the limitations of the Cartesian view. Consider two different models of child psychology. Jean Piaget’s theory of cognitive development conceives of individual growth in a Cartesian fashion, as the reorganisation of mental processes. The developing child is depicted as a lone learner – an inventive scientist, struggling independently to make sense of the world. By contrast, ‘dialogical’ theories, brought to life in experiments such as Lisa Freund’s ‘doll house study’ from 1990, emphasise interactions between the child and the adult who can provide ‘scaffolding’ for how she understands the world.

A grimmer example might be solitary confinement in prisons. The punishment was originally designed to encourage introspection: to turn the prisoner’s thoughts inward, to prompt her to reflect on her crimes, and to eventually help her return to society as a morally cleansed citizen. A perfect policy for the reform of Cartesian individuals.

The information is here.

Heuristics and Public Policy: Decision Making Under Bounded Rationality

Sanjit Dhami, Ali al-Nowaihi, and Cass Sunstein
SSRN.com
Posted June 20, 2018

Abstract

How do human beings make decisions when, as the evidence indicates, the assumptions of the Bayesian rationality approach in economics do not hold? Do human beings optimize, or can they? Several decades of research have shown that people possess a toolkit of heuristics to make decisions under certainty, risk, subjective uncertainty, and true uncertainty (or Knightian uncertainty). We outline recent advances in knowledge about the use of heuristics and departures from Bayesian rationality, with particular emphasis on the growing formalization of those departures, which adds necessary precision. We also explore the relationship between bounded rationality and libertarian paternalism, or nudges, and show that some recent objections, founded on psychological work on the usefulness of certain heuristics, are based on serious misunderstandings.
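
The "toolkit of heuristics" the authors describe is easiest to see in code. Below is a minimal, illustrative Python sketch of one well-known fast-and-frugal heuristic, take-the-best: rather than weighting and integrating all available evidence as a Bayesian optimizer would, it checks cues in order of validity and lets the first discriminating cue decide. The cue names and their ordering are invented for illustration, not taken from the paper.

```python
# Illustrative sketch of the "take-the-best" heuristic (Gigerenzer & Goldstein).
# Cues are checked in descending order of validity; the first cue that
# discriminates between the options decides. No weighting or integration of
# the remaining evidence takes place. Cue names and validities are
# hypothetical, chosen only for illustration.

# Each option is a dict mapping cue name -> binary cue value (1 = present).
city_a = {"has_airport": 1, "has_university": 1, "is_capital": 0}
city_b = {"has_airport": 1, "has_university": 0, "is_capital": 1}

# Cues ordered by (hypothetical) validity, highest first.
cue_order = ["has_airport", "has_university", "is_capital"]

def take_the_best(option_a, option_b, cues):
    """Return 'a', 'b', or 'guess' by checking cues in validity order."""
    for cue in cues:
        a, b = option_a[cue], option_b[cue]
        if a != b:                      # first discriminating cue decides
            return "a" if a > b else "b"
    return "guess"                      # no cue discriminates

print(take_the_best(city_a, city_b, cue_order))  # -> 'a' (university cue decides)
```

The heuristic ignores most of the available information by design, which is why it can be both fast and, in the right environments, surprisingly accurate.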

The article can be downloaded here.

Tuesday, July 24, 2018

Amazon, Google and Microsoft Employee AI Ethics Are Best Hope For Humanity

Paul Armstrong
Forbes.com
Originally posted June 26, 2018

Here is an excerpt:

Google recently dropped 'Don't be Evil' from its Code of Conduct documents; what were once guiding words now appear to be afterthoughts, and Google isn't alone. From drone use to deals with immigration services, large tech companies are looking to monetise their creations, and who can blame them: projects can cost double-digit millions as companies look to maintain an edge in a continually evolving marketplace. Employees, it seems, are not without a conscience, and as talent becomes the one thing companies need in this war, that power needs to be wielded, or we risk runaway-train scenarios. If you want an idea of where things could go, read this.

China is using AI software and facial recognition to determine who can travel, by what means and where. You might think this is a long way from being used on US or UK soil, but you'd be wrong. London has cameras on pretty much every street, and the US has Amazon's Rekognition (Orlando just abandoned its use, but other tests remain active). Employees need to be the conscience of large entities, not only the ACLU or the civil-liberties inclined. From racist AI to machine learning that creates ever-better faked video, how you build technology matters as much as why. Google has already mastered the technology to convince a human it is not talking to a robot, thanks to um's and ah's; Google's next job is to convince us that this is a good thing.

The information is here.

Data ethics is more than just what we do with data, it’s also about who’s doing it

James Arvanitakis, Andrew Francis, and Oliver Obst
The Conversation
Originally posted June 21, 2018

If the recent Cambridge Analytica data scandal has taught us anything, it’s that the ethical cultures of our largest tech firms need tougher scrutiny.

But moral questions about what data should be collected and how it should be used are only the beginning. They raise broader questions about who gets to make those decisions in the first place.

We currently have a system in which power over the judicious and ethical use of data is overwhelmingly concentrated among white men. Research shows that the unconscious biases that emerge from a person’s upbringing and experiences can be baked into technology, resulting in negative consequences for minority groups.

(cut)

People noticed that Google Translate showed a tendency to assign feminine gender pronouns to certain jobs and masculine pronouns to others – “she is a babysitter” or “he is a doctor” – in a manner that reeked of sexism. Google Translate bases its decision about which gender to assign to a particular job on the training data it learns from. In this case, it’s picking up the gender bias that already exists in the world and feeding it back to us.

If we want to ensure that algorithms don’t perpetuate and reinforce existing biases, we need to be careful about the data we use to train algorithms. But if we hold the view that women are more likely to be babysitters and men are more likely to be doctors, then we might not even notice – and correct for – biased data in the tools we build.

So it matters who writes the code, because the code defines the algorithm, and the algorithm makes judgements on the basis of the data.
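
As a toy illustration of the mechanism described above, here is a minimal Python sketch, with invented co-occurrence counts, of a system that picks a pronoun for each occupation by majority vote over its training data. Nothing more than skewed counts is needed to reproduce the biased output.

```python
# Toy illustration of how a translation model can inherit gender bias from
# its training corpus. The co-occurrence counts below are invented for
# illustration; a real system learns analogous statistics from billions
# of sentences.

from collections import Counter

# Hypothetical corpus statistics: how often each occupation co-occurs
# with each pronoun in the training data.
corpus_counts = {
    "doctor":     Counter({"he": 900, "she": 100}),
    "babysitter": Counter({"he": 50,  "she": 950}),
}

def translate_pronoun(occupation):
    """Pick the pronoun most frequent for this occupation in the corpus."""
    counts = corpus_counts[occupation]
    pronoun, _ = counts.most_common(1)[0]
    return f"{pronoun} is a {occupation}"

print(translate_pronoun("doctor"))      # -> "he is a doctor"
print(translate_pronoun("babysitter"))  # -> "she is a babysitter"
```

The bias here lives entirely in the data, but whether to rebalance the counts or change the decision rule is a choice made by whoever writes the code, which is exactly the point.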

The information is here.

Monday, July 23, 2018

St. Cloud psychologist gets 3-plus years for sex with client

Nora G. Hertel
St. Cloud Times
Originally published June 14, 2018

Psychologist Eric Felsch will spend more than three years in prison for having sex with a patient in 2011.

Stearns County Judge Andrew Pearson sentenced Felsch Thursday to 41 months in prison for third-degree criminal sexual conduct, a felony. He pleaded guilty to the charge in April.

Felsch, 46, has a St. Cloud address.

It is against Minnesota law for a psychotherapist to have sex with a patient during or outside of a therapy session. A defendant facing that charge cannot defend himself by saying the victim consented to the sexual activity.

Sex with clients is also against ethical codes taught to psychologists.

The information is here.

A psychologist in Pennsylvania can also face criminal charges for engaging in a sexual relationship with a current patient.

Assessing the contextual stability of moral foundations: Evidence from a survey experiment

David Ciuk
Research and Politics
First Published June 20, 2018

Abstract

Moral foundations theory (MFT) claims that individuals use their intuitions on five “virtues” as guidelines for moral judgment, and recent research makes the case that these intuitions cause people to adopt important political attitudes, including partisanship and ideology. New work in political science, however, not only demonstrates that the causal effect of moral foundations on these political predispositions is weaker than once thought, but also opens the door to the possibility that causality runs in the opposite direction, from political predispositions to moral foundations. In this manuscript, I build on this new work and test the extent to which partisan and ideological considerations cause individuals’ moral foundations to shift in predictable ways. The results show that while these group-based cues do exert some influence on moral foundations, the effects of outgroup cues are particularly strong. I conclude that small shifts in political context do cause MFT measures to move, and, to close, I discuss the need for continued theoretical development in MFT as well as increased attention to measurement.

The research is here.

Sunday, July 22, 2018

Are free will believers nicer people? (Four studies suggest not)

Damien Crone and Neil Levy
Preprint
Created January 10, 2018

Abstract

Free will is widely considered a foundational component of Western moral and legal codes, and yet current conceptions of free will are widely thought to fit uncomfortably with much research in psychology and neuroscience. Recent research investigating the consequences of laypeople’s free will beliefs (FWBs) for everyday moral behavior suggests that stronger FWBs are associated with various desirable moral characteristics (e.g., greater helpfulness, less dishonesty). These findings have sparked concern regarding the potential for moral degeneration throughout society as science promotes a view of human behavior that is widely perceived to undermine the notion of free will. We report four studies (combined N = 921) originally concerned with possible mediators and/or moderators of the abovementioned associations. Unexpectedly, we found no association between FWBs and moral behavior. Our findings suggest that the FWB–moral behavior association (and accompanying concerns regarding decreases in FWBs causing moral degeneration) may be overstated.

The research is here.

Saturday, July 21, 2018

Bias detectives: the researchers striving to make algorithms fair

Rachel Courtland
Nature.com
Originally posted

Here is an excerpt:

“What concerns me most is the idea that we’re coming up with systems that are supposed to ameliorate problems [but] that might end up exacerbating them,” says Kate Crawford, co-founder of the AI Now Institute, a research centre at New York University that studies the social implications of artificial intelligence.

With Crawford and others waving red flags, governments are trying to make software more accountable. Last December, the New York City Council passed a bill to set up a task force that will recommend how to publicly share information about algorithms and investigate them for bias. This year, France’s president, Emmanuel Macron, has said that the country will make all algorithms used by its government open. And in guidance issued this month, the UK government called for those working with data in the public sector to be transparent and accountable. Europe’s General Data Protection Regulation (GDPR), which came into force at the end of May, is also expected to promote algorithmic accountability.

In the midst of such activity, scientists are confronting complex questions about what it means to make an algorithm fair. Researchers such as Vaithianathan, who work with public agencies to try to build responsible and effective software, must grapple with how automated tools might introduce bias or entrench existing inequity — especially if they are being inserted into an already discriminatory social system.
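
One concrete way researchers probe such tools is to compute group fairness metrics over a model's decisions. The sketch below, in Python with invented data, checks one of the simplest criteria, demographic parity (the gap in positive-decision rates between two groups); real audits typically add richer criteria such as equalized odds.

```python
# Minimal sketch of a demographic-parity audit: compare the rate of
# positive decisions across two groups. The data below is invented for
# illustration; a real audit runs over a model's actual outputs.

def positive_rate(decisions):
    """Fraction of decisions that are positive (1)."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs (1 = favourable decision), keyed by group.
decisions_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 6/8 = 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],   # 3/8 = 0.375
}

rates = {g: positive_rate(d) for g, d in decisions_by_group.items()}
parity_gap = abs(rates["group_a"] - rates["group_b"])

print(rates)        # {'group_a': 0.75, 'group_b': 0.375}
print(parity_gap)   # 0.375 -- a large gap flags a potential disparity
```

A metric like this only flags a disparity; deciding whether the gap reflects bias in the tool or inequity already present in the data is precisely the harder question the researchers above are grappling with.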

The information is here.

Friday, July 20, 2018

How to Look Away

Megan Garber
The Atlantic
Originally published June 20, 2018

Here is an excerpt:

It is a dynamic—the democratic alchemy that converts seeing things into changing them—that the president and his surrogates have been objecting to, as they have defended their policy. They have been, this week (with notable absences), busily appearing on cable-news shows and giving disembodied quotes to news outlets, insisting that things aren’t as bad as they seem: that the images and the audio and the evidence are wrong not merely ontologically, but also emotionally. Don’t be duped, they are telling Americans. Your horror is incorrect. The tragedy is false. Your outrage about it, therefore, is false. Because, actually, the truth is so much more complicated than your easy emotions will allow you to believe. Actually, as Fox News host Laura Ingraham insists, the holding pens that seem to house horrors are “essentially summer camps.” And actually, as Fox & Friends’ Steve Doocy instructs, the pens are not cages so much as “walls” that have merely been “built … out of chain-link fences.” And actually, Kirstjen Nielsen wants you to remember, “We provide food, medical, education, all needs that the child requests.” And actually, too—do not be fooled by your own empathy, Tom Cotton warns—think of the child-smuggling. And of MS-13. And of sexual assault. And of soccer fields. There are so many reasons to look away, so many other situations more deserving of your outrage and your horror.

It is a neat rhetorical trick: the logic of not in my backyard, invoked not merely despite the fact that it is happening in our backyard, but because of it. With seed and sod that we ourselves have planted.

Yes, yes, there are tiny hands, reaching out for people who are not there … but those are not the point, these arguments insist and assure. To focus on those images—instead of seeing the system, a term that Nielsen and even Trump, a man not typically inclined to think in networked terms, have been invoking this week—is to miss the larger point.

The article is here.