Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Wednesday, December 5, 2018

Toward a psychology of Homo sapiens: Making psychological science more representative of the human population

Mostafa Salari Rad, Alison Jane Martingano, and Jeremy Ginges
PNAS, November 6, 2018, 115 (45), 11401–11405; published ahead of print November 6, 2018; https://doi.org/10.1073/pnas.1721165115

Abstract

Two primary goals of psychological science should be to understand what aspects of human psychology are universal and the way that context and culture produce variability. This requires that we take into account the importance of culture and context in the way that we write our papers and in the types of populations that we sample. However, most research published in our leading journals has relied on sampling WEIRD (Western, educated, industrialized, rich, and democratic) populations. One might expect that our scholarly work and editorial choices would by now reflect the knowledge that Western populations may not be representative of humans generally with respect to any given psychological phenomenon. However, as we show here, almost all research published by one of our leading journals, Psychological Science, relies on Western samples and uses these data in an unreflective way to make inferences about humans in general. To take us forward, we offer a set of concrete proposals for authors, journal editors, and reviewers that may lead to a psychological science that is more representative of the human condition.

Georgia Tech has had a ‘dramatic increase’ in ethics complaints, president says

Eric Stirgus
The Atlanta Journal-Constitution
Originally published November 6, 2018

Here is an excerpt:

The Atlanta Journal-Constitution reported in September that Georgia Tech is often slow in completing ethics investigations. Georgia Tech took an average of 102 days last year to investigate a complaint, the second-longest time of any college or university in the University System of Georgia, according to a report presented in April to the state’s Board of Regents. Savannah State University had the longest average time, 135 days.

Tuesday’s meeting is the kick-off to more than a week’s worth of discussions at Tech to improve its ethics culture. University System of Georgia Chancellor Steve Wrigley ordered Georgia Tech to update him on what officials there are doing to improve after reports found problems such as a top official who was a paid board member of a German-based company that had contracts with Tech. Peterson’s next update is due Monday.

A few employees told Peterson they’re concerned that many administrators are now afraid to make decisions and asked the president what’s being done to address that. Peterson acknowledged “there’s some anxiety on campus” and asked employees to “embrace each other” as they work through what he described as an embarrassing chapter in the school’s history.

The info is here.

Tuesday, December 4, 2018

Letting tech firms frame the AI ethics debate is a mistake

Robert Hart
www.fastcompany.com
Originally posted November 2, 2018

Here is an excerpt:

Even many ethics-focused panel discussions–or manel discussions, as some call them–are pale, male, and stale. That is to say, they are made up predominantly of old, white, straight, and wealthy men. Yet these discussions are meant to be guiding lights for AI technologies that affect everyone.

A historical illustration is useful here. Consider polio, a disease that was declared global public health enemy number one after the successful eradication of smallpox decades ago. The “global” part is important. Although the movement to eradicate polio was launched by the World Health Assembly, the decision-making body of the United Nations’ World Health Organization, the eradication campaign was spearheaded primarily by groups in the U.S. and similarly wealthy countries. Promulgated with intense international pressure, the campaign distorted local health priorities in many parts of the developing world.

It’s not that the developing countries wanted their citizens to contract polio. Of course, they didn’t. It’s just that they would have rather spent the significant sums of money on more pressing local problems. In essence, one wealthy country imposed its own moral judgement on the rest of the world, with little forethought about the potential unintended consequences. The voices of a few in the West grew to dominate and overpower those elsewhere–a kind of ethical colonialism, if you will.

The info is here.

Document ‘informed refusal’ just as you would informed consent

James Scibilia
AAP News
Originally posted October 20, 2018

Here is an excerpt:

The requirements of informed refusal are the same as those of informed consent. Providers must explain:

  • the proposed treatment or testing;
  • the risks and benefits of refusal;
  • anticipated outcome with and without treatment; and
  • alternative therapies, if available.

Documentation of this discussion, including all four components, in the medical record is critical to mounting a successful defense from a claim that you failed to warn about the consequences of refusing care.

Since state laws vary, it is good practice to check with your malpractice carrier about preferred risk management documentation. Generally, the facts of these discussions should be included and signed by the caretaker. This conversation and documentation should not be delegated to other members of the health care team. At least one state has affirmed through a Supreme Court decision that informed consent must be obtained by the provider performing the procedure and not another team member; it is likely the concept of informed refusal would bear the same requirements.

The info is here.

Monday, December 3, 2018

Our lack of interest in data ethics will come back to haunt us

Jayson DeMers
thenextweb.com
Originally posted November 4, 2018

Here is an excerpt:

Most people understand the privacy concerns that can arise with collecting and harnessing big data, but the ethical concerns run far deeper than that.

These are just a smattering of the ethical problems in big data:

  • Ownership: Who really “owns” your personal data, like what your political preferences are, or which types of products you’ve bought in the past? Is it you? Or is it public information? What about people under the age of 18? What about information you’ve tried to privatize?
  • Bias: Biases in algorithms can have potentially destructive effects. Everything from facial recognition to chatbots can be skewed to favor one demographic over another, or one set of values over another, based on the data used to power it (a minimal sketch illustrating this follows the list).
  • Transparency: Are companies required to disclose how they collect and use data? Or are they free to hide some of their efforts? More importantly, who gets to decide the answer here?
  • Consent: What does it take to “consent” to having your data harvested? Is your passive use of a platform enough? What about agreeing to multi-page, complexly worded Terms and Conditions documents?
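
On the bias point above: a minimal, self-contained Python sketch, using purely synthetic and hypothetical data, of how a decision rule tuned on data dominated by one group can produce much higher error rates for an under-represented group. The group sizes, score distributions, and cutoff rule here are illustrative assumptions, not anything taken from the article.

    # Hypothetical, self-contained illustration: tune a single score cutoff on
    # data dominated by group A, then compare how often qualified members of
    # each group would be wrongly rejected by that same cutoff.
    import random

    random.seed(0)

    def qualified_scores(n, mean):
        # Synthetic proxy-feature scores for n qualified people in one group.
        return [random.gauss(mean, 1.0) for _ in range(n)]

    # Group A dominates the training data; group B is under-represented, and its
    # qualified members happen to score lower on the proxy feature.
    train_a = qualified_scores(900, mean=2.0)
    train_b = qualified_scores(100, mean=1.0)

    # A naive rule: reject the lowest-scoring 10% of the pooled training data.
    # Because 90% of that pool is group A, the cutoff is effectively tuned to A.
    pooled = sorted(train_a + train_b)
    cutoff = pooled[len(pooled) // 10]

    def false_rejection_rate(scores, cutoff):
        # Share of qualified people the rule would wrongly reject.
        return sum(1 for s in scores if s < cutoff) / len(scores)

    print("Group A false rejections:", round(false_rejection_rate(qualified_scores(10000, 2.0), cutoff), 3))
    print("Group B false rejections:", round(false_rejection_rate(qualified_scores(10000, 1.0), cutoff), 3))

On a typical run, the under-represented group is wrongly rejected at several times the rate of the majority group even though the same rule is applied to everyone; that is the kind of skew the bullet describes.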

If you haven’t heard about these or haven’t thought much about them, I can’t really blame you. We aren’t bringing these questions to the forefront of the public discussion on big data, nor are big data companies going out of their way to discuss them before issuing new solutions.

“Oops, our bad”

One of the biggest problems we keep running into is what I call the “oops, our bad” effect. The idea here is that big companies and data scientists use and abuse data however they want, outside the public eye and without having an ethical discussion about their practices. If and when the public finds out that some egregious activity took place, there’s usually a short-lived public outcry, and the company issues an apology for the actions — without really changing its practices or making up for the damage.

The info is here.

Choosing victims: Human fungibility in moral decision-making

Michal Bialek, Jonathan Fugelsang, and Ori Friedman
Judgment and Decision Making, Vol. 13, No. 5, pp. 451–457.

Abstract

In considering moral dilemmas, people often judge the acceptability of exchanging individuals’ interests, rights, and even lives. Here we investigate the related, but often overlooked, question of how people decide who to sacrifice in a moral dilemma. In three experiments (total N = 558), we provide evidence that these decisions often depend on the feeling that certain people are fungible and interchangeable with one another, and that one factor that leads people to be viewed this way is shared nationality. In Experiments 1 and 2, participants read vignettes in which three individuals’ lives could be saved by sacrificing another person. When the individuals were characterized by their nationalities, participants chose to save the three endangered people by sacrificing someone who shared their nationality, rather than sacrificing someone from a different nationality. Participants did not show similar preferences, though, when individuals were characterized by their age or month of birth. In Experiment 3, we replicated the effect of nationality on participants’ decisions about who to sacrifice, and also found that they did not show a comparable preference in a closely matched vignette in which lives were not exchanged. This suggests that the effect of nationality on decisions of who to sacrifice may be specific to judgments about exchanges of lives.

The research is here.

Sunday, December 2, 2018

Re-thinking Data Protection Law in the Age of Big Data and AI

Sandra Wachter and Brent Mittelstadt
Oxford Internet Institute
Originally posted October 11, 2018

Numerous applications of ‘Big Data analytics’ drawing potentially troubling inferences about individuals and groups have emerged in recent years. Major internet platforms are behind many of the highest profile examples: Facebook may be able to infer protected attributes such as sexual orientation and race, as well as political opinions and imminent suicide attempts, while third parties have used Facebook data to decide on eligibility for loans and to infer political stances on abortion. Susceptibility to depression can similarly be inferred via usage data from Facebook and Twitter. Google has attempted to predict flu outbreaks as well as other diseases and their outcomes. Microsoft can likewise predict Parkinson’s disease and Alzheimer’s disease from search engine interactions. Other recent invasive applications include prediction of pregnancy by Target, assessment of users’ satisfaction based on mouse tracking, and China’s far-reaching Social Credit Scoring system.

Inferences in the form of assumptions or predictions about future behaviour are often privacy-invasive, sometimes counterintuitive and, in any case, cannot be verified at the time of decision-making. While we are often unable to predict, understand or refute these inferences, they nonetheless impact on our private lives, identity, reputation, and self-determination.

These facts suggest that the greatest risks of Big Data analytics do not stem solely from how input data (name, age, email address) is used. Rather, it is the inferences that are drawn about us from the collected data, which determine how we, as data subjects, are being viewed and evaluated by third parties, that pose the greatest risk. It follows that protections designed to provide oversight and control over how data is collected and processed are not enough; rather, individuals require meaningful protection against not only the inputs, but the outputs of data processing.

Unfortunately, European data protection law and jurisprudence currently fails in this regard.

The info is here.

Saturday, December 1, 2018

Building trust by tearing others down: When accusing others of unethical behavior engenders trust

Jessica A. Kennedy, Maurice E. Schweitzer.
Organizational Behavior and Human Decision Processes
Volume 149, November 2018, Pages 111-128

Abstract

We demonstrate that accusations harm trust in targets, but boost trust in the accuser when the accusation signals that the accuser has high integrity. Compared to individuals who did not accuse targets of engaging in unethical behavior, accusers engendered greater trust when observers perceived the accusation to be motivated by a desire to defend moral norms, rather than by a desire to advance ulterior motives. We also found that the accuser’s moral hypocrisy, the accusation's revealed veracity, and the target’s intentions when committing the unethical act moderate the trust benefits conferred to accusers. Taken together, we find that accusations have important interpersonal consequences.

Highlights

•    Accusing others of unethical behavior can engender greater trust in an accuser.
•    Accusations can elevate trust by boosting perceptions of accusers’ integrity.
•    Accusations fail to build trust when they are perceived to reflect ulterior motives.
•    Morally hypocritical accusers and false accusations fail to build trust.
•    Accusations harm trust in the target.

The research is here.

Friday, November 30, 2018

To regulate AI we need new laws, not just a code of ethics

Paul Chadwick
The Guardian
Originally posted October 28, 2018

Here is an excerpt:

To Nemitz, “the absence of such framing for the internet economy has already led to a widespread culture of disregard of the law and put democracy in danger, the Facebook Cambridge Analytica scandal being only the latest wake-up call”.

Nemitz identifies four bases of digital power which create and then reinforce its unhealthy concentration in too few hands: lots of money, which means influence; control of “infrastructures of public discourse”; collection of personal data and profiling of people; and domination of investment in AI, most of it a “black box” not open to public scrutiny.

The key question is which of the challenges of AI “can be safely and with good conscience left to ethics” and which need law. Nemitz sees much that needs law.

In an argument both biting and sophisticated, Nemitz sketches a regulatory framework for AI that will seem to some like the GDPR on steroids.

Among several large claims, Nemitz argues that “not regulating these all pervasive and often decisive technologies by law would effectively amount to the end of democracy. Democracy cannot abdicate, and in particular not in times when it is under pressure from populists and dictatorships.”

The info is here.