Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Saturday, July 28, 2018

Costs, needs, and integration efforts shape helping behavior toward refugees

Robert Böhm, Maik M. P. Theelen, Hannes Rusch, and Paul A. M. Van Lange
PNAS, 201805601; published ahead of print June 25, 2018

Abstract

Recent political instabilities and conflicts around the world have drastically increased the number of people seeking refuge. The challenges associated with the large number of arriving refugees have revealed a deep divide among the citizens of host countries: one group welcomes refugees, whereas another rejects them. Our research aim is to identify factors that help us understand host citizens’ (un)willingness to help refugees. We devise an economic game that captures the basic structural properties of the refugee situation. We use it to investigate both economic and psychological determinants of citizens’ prosocial behavior toward refugees. In three controlled laboratory studies, we find that helping refugees becomes less likely when it is individually costly to the citizens. At the same time, helping becomes more likely with the refugees’ neediness: helping increases when it prevents a loss rather than generates a gain for the refugees. Moreover, it is particularly citizens with higher degrees of prosocial orientation who are willing to provide help at a personal cost. When refugees have to exert a minimum level of effort to be eligible for support by the citizens, these mandatory “integration efforts” further increase prosocial citizens’ willingness to help. Our results underscore that economic factors play a key role in shaping individual refugee helping behavior but also show that psychological factors modulate how individuals respond to them. Moreover, our economic game is a useful complement to correlational survey measures and can be used for pretesting policy measures aimed at promoting prosocial behavior toward refugees.
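The structure the abstract describes lends itself to a simple payoff sketch. The Python fragment below is a minimal illustration of the three manipulations named above (a personal cost of helping for the citizen, loss-versus-gain framing of the refugees' neediness, and a minimum "integration effort" threshold for eligibility); every name, signature, and value here is a hypothetical illustration, not a parameter from the paper.

```python
# Hypothetical sketch of the payoff logic described in the abstract.
# Parameter names and values are illustrative assumptions only.

def citizen_payoff(endowment: float, cost: float, helps: bool) -> float:
    """A citizen keeps their endowment and pays a cost only when helping."""
    return endowment - cost if helps else endowment

def refugee_payoff(baseline: float, benefit: float, helped: bool,
                   need_frame: str = "gain",
                   effort_required: float = 0.0, effort: float = 0.0) -> float:
    """Refugees receive the benefit only if helped and, where an
    'integration effort' threshold applies, only if they have met it.
    In the 'loss' frame help prevents a loss from the baseline; in the
    'gain' frame it adds to the baseline."""
    eligible = helped and effort >= effort_required
    if need_frame == "loss":
        return baseline if eligible else baseline - benefit
    return baseline + benefit if eligible else baseline

# Example: costly help under the loss frame with an effort requirement.
print(citizen_payoff(endowment=10, cost=2, helps=True))                # 8
print(refugee_payoff(baseline=10, benefit=5, helped=True,
                     need_frame="loss", effort_required=1, effort=1))  # 10
```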

The research is here.

Friday, July 27, 2018

Morality in the Machines

Erick Trickey
Harvard Law Bulletin
Originally posted June 26, 2018

Here is an excerpt:

In February, the Harvard and MIT researchers endorsed a revised approach in the Massachusetts House’s criminal justice bill, which calls for a bail commission to study risk-assessment tools. In late March, the House-Senate conference committee included the more cautious approach in its reconciled criminal justice bill, which passed both houses and was signed into law by Gov. Charlie Baker in April.

Meanwhile, Harvard and MIT scholars are going still deeper into the issue. Bavitz and a team of Berkman Klein researchers are developing a database of governments that use risk scores to help set bail. It will be searchable to see whether court cases have challenged a risk-score tool’s use, whether that tool is based on peer-reviewed scientific literature, and whether its formulas are public.

Many risk-score tools are created by private companies that keep their algorithms secret. That lack of transparency creates due-process concerns, says Bavitz. “Flash forward to a world where a judge says, ‘The computer tells me you’re a risk score of 5 out of 7.’ What does it mean? I don’t know. There’s no opportunity for me to lift up the hood of the algorithm.” Instead, he suggests, governments could design their own risk-assessment algorithms and software, using their own staff or collaborating with foundations or researchers.
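What such a home-grown, transparent alternative might look like is easy to sketch. The toy Python example below shows a fully public, additive point system in which every factor and weight can be inspected and challenged; the factors, weights, and 7-point cap are invented for illustration and do not correspond to any real jurisdiction's instrument.

```python
# Hypothetical sketch of a transparent, publicly documented risk score.
# Factors, weights, and the 7-point cap are illustrative assumptions.

PUBLISHED_WEIGHTS = {
    "prior_failure_to_appear": 3,
    "pending_charges": 2,
    "recent_offense": 2,
}

def risk_score(defendant: dict) -> int:
    """Add the published weight for each factor that applies, capped at 7.
    Because the table above is public, every point can be traced to a
    named factor and disputed in court."""
    score = sum(weight for factor, weight in PUBLISHED_WEIGHTS.items()
                if defendant.get(factor, False))
    return min(score, 7)

# A defendant (and the judge) can see exactly where the 5 comes from.
print(risk_score({"prior_failure_to_appear": True, "pending_charges": True}))  # 5
```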

Students in the ethics class agreed that risk-score programs shouldn’t be used in court if their formulas aren’t transparent, according to then-HLS 3L Arjun Adusumilli. “When people’s liberty interests are at stake, we really expect a certain amount of input, feedback and appealability,” he says. “Even if the thing is statistically great, and makes good decisions, we want reasons.”

The information is here.

Informed Consent and the Role of the Treating Physician

Holly Fernandez Lynch, Steven Joffe, and Eric A. Feldman
Originally posted June 21, 2018
N Engl J Med 2018; 378:2433-2438
DOI: 10.1056/NEJMhle1800071

Here are a few excerpts:

In 2017, in Shinal v. Toms, the Pennsylvania Supreme Court ruled that informed consent must be obtained directly by the treating physician. The authors discuss the potential implications of this ruling and argue that a team-based approach to consent is better for patients and physicians.

(cut)

Implications in Pennsylvania and Beyond

Shinal has already had a profound effect in Pennsylvania, where it represents a substantial departure from typical consent practice. More than half the physicians who responded to a recent survey conducted by the Pennsylvania Medical Society (PAMED) reported a change in the informed-consent process in their work setting; of that group, the vast majority expressed discontent with the effect of the new approach on patient flow and the way patients are served. Medical centers throughout the state have changed their consent policies, precluding nonphysicians from obtaining patient consent to the procedures specified in the MCARE Act and sometimes restricting the involvement of physician trainees. Some Pennsylvania institutions have also applied the Shinal holding to research, in light of the reference in the MCARE Act to experimental products and uses, despite the clear policy of the Food and Drug Administration (FDA) allowing investigators to involve other staff in the consent process.

(cut)

Selected State Informed-Consent Laws.

Although the Shinal decision is not binding outside of Pennsylvania, cases bearing on critical ethical dimensions of consent have a history of influence beyond their own jurisdictions.

The information is here.

Thursday, July 26, 2018

Virtuous technology

Mustafa Suleyman
medium.com
Originally published June 26, 2018

Here is an excerpt:

There are at least three important asymmetries between the world of tech and the world itself. First, the asymmetry between people who develop technologies and the communities who use them. Salaries in Silicon Valley are twice the median wage for the rest of the US and the employee base is unrepresentative when it comes to gender, race, class and more. As we have seen in other fields, this risks a disconnect between the inner workings of organisations and the societies they seek to serve.

This is an urgent problem. Women and minority groups remain badly underrepresented, and leaders need to be proactive in breaking the mould. The recent spotlight on these issues has meant that more people are aware of the need for workplace cultures to change, but these underlying inequalities also make their way into our companies in more insidious ways. Technology is not value neutral — it reflects the biases of its creators — and must be built and shaped by diverse communities if we are to minimise the risk of unintended harms.

Second, there is an asymmetry of information regarding how technology actually works, and the impact that digital systems have on everyday life. Ethical outcomes in tech depend on far more than algorithms and data: they depend on the quality of societal debate and genuine accountability.

The information is here.

Number of Canadians choosing medically assisted death jumps 30%

Kathleen Harris
www.cbc.ca
Originally posted June 21, 2018

There were 1,523 medically assisted deaths in Canada in the last six-month reporting period — a nearly 30 per cent increase over the previous six months.

Cancer was the most common underlying medical condition in reported assisted death cases, cited in about 65 per cent of all medically assisted deaths, according to the report from Health Canada.

Using data from Statistics Canada, the report shows medically assisted deaths accounted for 1.07 per cent of all deaths in the country over those six months. That is consistent with reports from other countries that have assisted-death regimes, where the figure ranges from 0.3 to 4 per cent.
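As a rough consistency check on these figures (assuming the 1,523 assisted deaths and the 1.07 per cent share refer to the same six-month period), they imply a total of about 1,523 / 0.0107 ≈ 142,000 deaths in Canada over those six months.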

The information is here.

Wednesday, July 25, 2018

Descartes was wrong: ‘a person is a person through other persons’

Abeba Birhane
aeon.com
Originally published April 7, 2017

Here is an excerpt:

So reality is not simply out there, waiting to be uncovered. ‘Truth is not born nor is it to be found inside the head of an individual person, it is born between people collectively searching for truth, in the process of their dialogic interaction,’ Bakhtin wrote in Problems of Dostoevsky’s Poetics (1929). Nothing simply is itself, outside the matrix of relationships in which it appears. Instead, being is an act or event that must happen in the space between the self and the world.

Accepting that others are vital to our self-perception is a corrective to the limitations of the Cartesian view. Consider two different models of child psychology. Jean Piaget’s theory of cognitive development conceives of individual growth in a Cartesian fashion, as the reorganisation of mental processes. The developing child is depicted as a lone learner – an inventive scientist, struggling independently to make sense of the world. By contrast, ‘dialogical’ theories, brought to life in experiments such as Lisa Freund’s ‘doll house study’ from 1990, emphasise interactions between the child and the adult who can provide ‘scaffolding’ for how she understands the world.

A grimmer example might be solitary confinement in prisons. The punishment was originally designed to encourage introspection: to turn the prisoner’s thoughts inward, to prompt her to reflect on her crimes, and to eventually help her return to society as a morally cleansed citizen. A perfect policy for the reform of Cartesian individuals.

The information is here.

Heuristics and Public Policy: Decision Making Under Bounded Rationality

Sanjit Dhami, Ali al-Nowaihi, and Cass Sunstein
SSRN.com
Posted June 20, 2018

Abstract

How do human beings make decisions when, as the evidence indicates, the assumptions of the Bayesian rationality approach in economics do not hold? Do human beings optimize, or can they? Several decades of research have shown that people possess a toolkit of heuristics to make decisions under certainty, risk, subjective uncertainty, and true uncertainty (or Knightian uncertainty). We outline recent advances in knowledge about the use of heuristics and departures from Bayesian rationality, with particular emphasis on growing formalization of those departures, which add necessary precision. We also explore the relationship between bounded rationality and libertarian paternalism, or nudges, and show that some recent objections, founded on psychological work on the usefulness of certain heuristics, are based on serious misunderstandings.

The article can be downloaded here.

Tuesday, July 24, 2018

Amazon, Google and Microsoft Employee AI Ethics Are Best Hope For Humanity

Paul Armstrong
Forbes.com
Originally posted June 26, 2018

Here is an excerpt:

Google recently dropped 'Don't be evil' from its Code of Conduct documents; what were once guiding words now appear to be afterthoughts, and Google isn't alone. From drone use to deals with immigration services, large tech companies are looking to monetise their creations, and who can blame them? Projects can cost double-digit millions as companies look to maintain an edge in a continually evolving marketplace. Employees are not without a conscience, it seems, and as talent becomes the one thing companies need in this war, that power needs to be wielded, or we risk runaway-train scenarios. If you want an idea of where things could go, read this.

China is using AI software and facial recognition to determine who can travel, using what, and where. You might think this is a long way from being used on US or UK soil, but you'd be wrong. London has cameras on pretty much every street, and the US has Amazon's Rekognition (Orlando just abandoned its use, but other tests remain active). Employees need to be the conscience of these large entities, not only the ACLU and the civil-liberties-inclined. From racist AI to video faked with machine learning, how you form technology matters as much as the why. Google has already mastered the technology to convince a human it is not talking to a robot, thanks to 'um's and 'ah's; Google's next job is to convince us that this is a good thing.

The information is here.

Data ethics is more than just what we do with data, it’s also about who’s doing it

James Arvanitakis, Andrew Francis, and Oliver Obst
The Conversation
Originally posted June 21, 2018

If the recent Cambridge Analytica data scandal has taught us anything, it’s that the ethical cultures of our largest tech firms need tougher scrutiny.

But moral questions about what data should be collected and how it should be used are only the beginning. They raise broader questions about who gets to make those decisions in the first place.

We currently have a system in which power over the judicious and ethical use of data is overwhelmingly concentrated among white men. Research shows that the unconscious biases that emerge from a person’s upbringing and experiences can be baked into technology, resulting in negative consequences for minority groups.

(cut)

People noticed that Google Translate showed a tendency to assign feminine gender pronouns to certain jobs and masculine pronouns to others – “she is a babysitter” or “he is a doctor” – in a manner that reeked of sexism. Google Translate bases its decision about which gender to assign to a particular job on the training data it learns from. In this case, it’s picking up the gender bias that already exists in the world and feeding it back to us.

If we want to ensure that algorithms don’t perpetuate and reinforce existing biases, we need to be careful about the data we use to train algorithms. But if we hold the view that women are more likely to be babysitters and men are more likely to be doctors, then we might not even notice – and correct for – biased data in the tools we build.
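The feedback loop is easy to see in miniature. The toy Python sketch below is a deliberate simplification, with an invented 'training corpus' and a majority-count decision rule that bears no relation to Google Translate's actual system; its only point is that a model choosing the most frequent pronoun will echo whatever imbalance its data contains.

```python
from collections import Counter

# Invented toy training data; the imbalance here is exactly what the
# model will learn and feed back to its users.
corpus = [
    ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
    ("babysitter", "she"), ("babysitter", "she"), ("babysitter", "he"),
]

counts = Counter(corpus)

def assign_pronoun(job: str) -> str:
    """Return the pronoun most often paired with this job in the corpus,
    i.e., reproduce the majority pattern in the training data."""
    return max(["he", "she"], key=lambda pronoun: counts[(job, pronoun)])

print(assign_pronoun("doctor"))      # 'he'  (learned from the imbalance)
print(assign_pronoun("babysitter"))  # 'she' (learned from the imbalance)
```

Correcting the output means auditing and correcting the counts, which is exactly the point about training data made above.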

So it matters who is writing the code because the code defines the algorithm, which makes the judgement on the basis of the data.

The information is here.