Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Monday, December 3, 2018

Our lack of interest in data ethics will come back to haunt us

Jayson Demers
thenextweb.com
Originally posted November 4, 2018

Here is an excerpt:

Most people understand the privacy concerns that can arise with collecting and harnessing big data, but the ethical concerns run far deeper than that.

These are just a smattering of the ethical problems in big data:

  • Ownership: Who really “owns” your personal data, like what your political preferences are, or which types of products you’ve bought in the past? Is it you? Or is it public information? What about people under the age of 18? What about information you’ve tried to privatize?
  • Bias: Biases in algorithms can have potentially destructive effects. Everything from facial recognition to chatbots can be skewed to favor one demographic over another, or one set of values over another, based on the data used to power it. (A small sketch of this mechanism appears after this list.)
  • Transparency: Are companies required to disclose how they collect and use data? Or are they free to hide some of their efforts? More importantly, who gets to decide the answer here?
  • Consent: What does it take to “consent” to having your data harvested? Is your passive use of a platform enough? What about agreeing to multi-page, complexly worded Terms and Conditions documents?
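
To make the bias bullet concrete, here is a minimal sketch of the mechanism; it is mine, not from Demers’ article. The data is synthetic, the two groups and the shift value are invented for illustration, and the “model” is just a single score threshold fit to the pooled training data:

    import random

    random.seed(0)

    def sample_score(label, group_shift):
        # Scores cluster around +1 for the positive class and -1 for the
        # negative class; group_shift offsets both clusters for one group.
        mean = 1.0 if label else -1.0
        return random.gauss(mean + group_shift, 1.0)

    def make_data(n, group_shift):
        data = []
        for _ in range(n):
            label = random.random() < 0.5
            data.append((sample_score(label, group_shift), label))
        return data

    def accuracy(threshold, data):
        hits = sum((score > threshold) == label for score, label in data)
        return hits / len(data)

    # Group A dominates the training pool; group B's score distribution is
    # shifted, standing in for any under-represented demographic.
    train = make_data(900, group_shift=0.0) + make_data(100, group_shift=1.5)

    # "Training" is just picking the cutoff that maximizes pooled accuracy,
    # so the majority group largely dictates where it lands.
    threshold = max((score for score, _ in train),
                    key=lambda t: accuracy(t, train))

    print("accuracy on group A:", round(accuracy(threshold, make_data(5000, 0.0)), 3))
    print("accuracy on group B:", round(accuracy(threshold, make_data(5000, 1.5)), 3))

Run as written, the shared cutoff is noticeably less accurate on the minority group, even though the decision rule never references group membership; the skew comes entirely from the data used to fit it.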

If you haven’t heard about these or haven’t thought much about them, I can’t really blame you. We aren’t bringing these questions to the forefront of the public discussion on big data, nor are big data companies going out of their way to discuss them before issuing new solutions.

“Oops, our bad”

One of the biggest problems we keep running into is what I call the “oops, our bad” effect. The idea here is that big companies and data scientists use and abuse data however they want, outside the public eye and without having an ethical discussion about their practices. If and when the public finds out that some egregious activity took place, there’s usually a short-lived public outcry, and the company issues an apology for its actions — without really changing its practices or making up for the damage.

The info is here.

Choosing victims: Human fungibility in moral decision-making

Michal Bialek, Jonathan Fugelsang, and Ori Friedman
Judgment and Decision Making, Vol 13, No. 5, pp. 451-457.

Abstract

In considering moral dilemmas, people often judge the acceptability of exchanging individuals’ interests, rights, and even lives. Here we investigate the related, but often overlooked, question of how people decide who to sacrifice in a moral dilemma. In three experiments (total N = 558), we provide evidence that these decisions often depend on the feeling that certain people are fungible and interchangeable with one another, and that one factor that leads people to be viewed this way is shared nationality. In Experiments 1 and 2, participants read vignettes in which three individuals’ lives could be saved by sacrificing another person. When the individuals were characterized by their nationalities, participants chose to save the three endangered people by sacrificing someone who shared their nationality, rather than sacrificing someone from a different nationality. Participants did not show similar preferences, though, when individuals were characterized by their age or month of birth. In Experiment 3, we replicated the effect of nationality on participants’ decisions about who to sacrifice, and also found that they did not show a comparable preference in a closely matched vignette in which lives were not exchanged. This suggests that the effect of nationality on decisions of who to sacrifice may be specific to judgments about exchanges of lives.

The research is here.

Sunday, December 2, 2018

Re-thinking Data Protection Law in the Age of Big Data and AI

Sandra Wachter and Brent Mittelstadt
Oxford Internet Institute
Originally posted October 11, 2018

Numerous applications of ‘Big Data analytics’ drawing potentially troubling inferences about individuals and groups have emerged in recent years. Major internet platforms are behind many of the highest-profile examples: Facebook may be able to infer protected attributes such as sexual orientation and race, as well as political opinions and imminent suicide attempts, while third parties have used Facebook data to decide on eligibility for loans and to infer political stances on abortion. Susceptibility to depression can similarly be inferred via usage data from Facebook and Twitter. Google has attempted to predict flu outbreaks as well as other diseases and their outcomes. Microsoft can likewise predict Parkinson’s disease and Alzheimer’s disease from search engine interactions. Other recent invasive applications include prediction of pregnancy by Target, assessment of users’ satisfaction based on mouse tracking, and China’s far-reaching Social Credit Scoring system.

Inferences in the form of assumptions or predictions about future behaviour are often privacy-invasive, sometimes counterintuitive and, in any case, cannot be verified at the time of decision-making. While we are often unable to predict, understand or refute these inferences, they nonetheless impact on our private lives, identity, reputation, and self-determination.

These facts suggest that the greatest risks of Big Data analytics do not stem solely from how input data (name, age, email address) is used. Rather, the greatest risk lies in the inferences drawn about us from the collected data, which determine how we, as data subjects, are viewed and evaluated by third parties. It follows that protections designed to provide oversight and control over how data is collected and processed are not enough; individuals require meaningful protection against not only the inputs but also the outputs of data processing.

Unfortunately, European data protection law and jurisprudence currently fail in this regard.

The info is here.

Saturday, December 1, 2018

Building trust by tearing others down: When accusing others of unethical behavior engenders trust

Jessica A. Kennedy & Maurice E. Schweitzer
Organizational Behavior and Human Decision Processes
Volume 149, November 2018, Pages 111-128

Abstract

We demonstrate that accusations harm trust in targets, but boost trust in the accuser when the accusation signals that the accuser has high integrity. Compared to individuals who did not accuse targets of engaging in unethical behavior, accusers engendered greater trust when observers perceived the accusation to be motivated by a desire to defend moral norms, rather than by a desire to advance ulterior motives. We also found that the accuser’s moral hypocrisy, the accusation's revealed veracity, and the target’s intentions when committing the unethical act moderate the trust benefits conferred to accusers. Taken together, we find that accusations have important interpersonal consequences.

Highlights

•    Accusing others of unethical behavior can engender greater trust in an accuser.
•    Accusations can elevate trust by boosting perceptions of accusers’ integrity.
•    Accusations fail to build trust when they are perceived to reflect ulterior motives.
•    Morally hypocritical accusers and false accusations fail to build trust.
•    Accusations harm trust in the target.

The research is here.

Friday, November 30, 2018

To regulate AI we need new laws, not just a code of ethics

Paul Chadwick
The Guardian
Originally posted October 28, 2018

Here is an excerpt:

To Nemitz, “the absence of such framing for the internet economy has already led to a widespread culture of disregard of the law and put democracy in danger, the Facebook Cambridge Analytica scandal being only the latest wake-up call”.

Nemitz identifies four bases of digital power which create and then reinforce its unhealthy concentration in too few hands: lots of money, which means influence; control of “infrastructures of public discourse”; collection of personal data and profiling of people; and domination of investment in AI, most of it a “black box” not open to public scrutiny.

The key question is which of the challenges of AI “can be safely and with good conscience left to ethics” and which need law. Nemitz sees much that needs law.

In an argument both biting and sophisticated, Nemitz sketches a regulatory framework for AI that will seem to some like the GDPR on steroids.

Among several large claims, Nemitz argues that “not regulating these all pervasive and often decisive technologies by law would effectively amount to the end of democracy. Democracy cannot abdicate, and in particular not in times when it is under pressure from populists and dictatorships.”

The info is here.

The Knobe Effect From the Perspective of Normative Orders

Andrzej Waleszczyński, Michał Obidziński, & Julia Rejewska
Studia Humana, Volume 7:4 (2018), pp. 9–15

Abstract:

The characteristic asymmetry in the attribution of intentionality in causing side effects, known as the Knobe effect, is considered to be a stable feature of human cognition. This article examines whether the way of thinking about and analysing one scenario may affect the other, and whether the mutual relationship between the ways in which both scenarios are analysed may affect the stability of the Knobe effect. The theoretical analyses and empirical studies performed are based on a distinction between moral and non-moral normativity that may affect the judgments passed in both scenarios. An essential role in judgments about the intentionality of causing a side effect could therefore be played by normative competences responsible for distinguishing between normative orders.

The research is here.

Thursday, November 29, 2018

Ethical Free Riding: When Honest People Find Dishonest Partners

Jörg Gross, Margarita Leib, Theo Offerman, & Shaul Shalvi
Psychological Science
https://doi.org/10.1177/0956797618796480

Abstract

Corruption is often the product of coordinated rule violations. Here, we investigated how such corrupt collaboration emerges and spreads when people can choose their partners versus when they cannot. Participants were assigned a partner and could increase their payoff by coordinated lying. After several interactions, they were either free to choose whether to stay with or switch their partner or forced to stay with or switch their partner. Results reveal that both dishonest and honest people exploit the freedom to choose a partner. Dishonest people seek a partner who will also lie—a “partner in crime.” Honest people, by contrast, engage in ethical free riding: They refrain from lying but also from leaving dishonest partners, taking advantage of their partners’ lies. We conclude that to curb collaborative corruption, relying on people’s honesty is insufficient. Encouraging honest individuals not to engage in ethical free riding is essential.

Conclusion
The freedom to select partners is important for the establishment of trust and cooperation. As we show here, however, it is also associated with potential moral hazards. For individuals who seek to keep the risk of collusion low, policies providing the freedom to choose one’s partners should be implemented with caution. Relying on people’s honesty may not always be sufficient, because honest people may be willing to tolerate others’ rule violations if they stand to profit from them. Our results underscore, once again, that people who refuse to turn a blind eye and instead stand up to corruption deserve all the praise they receive.

Does AI Ethics Need to be More Inclusive?

Patrick Lin
Forbes.com
Originally posted October 29, 2018

Here is an excerpt:

Ethics is more than a survey of opinions

First, as the study’s authors allude to in their Nature paper and elsewhere, public attitudes don’t dictate what’s ethical or not.  People believe all kinds of crazy things—such as that slavery should be permitted—but that doesn’t mean those ethical beliefs are true or have any weight.  So, capturing responses of more people doesn’t necessarily help figure out what’s ethical or not.  Sometimes, more is just more, not better or even helpful.

This is the difference between descriptive ethics and normative ethics.  The former is more like sociology that simply seeks to describe what people believe, while the latter is more like philosophy that seeks reasons for why a belief may be justified (or not) and how things ought to be.

Dr. Edmond Awad, lead author of the Nature paper, cautioned, “What we are trying to show here is descriptive ethics: peoples’ preferences in ethical decisions.  But when it comes to normative ethics, which is how things should be done, that should be left to experts.”

Nonetheless, public attitudes are a necessary ingredient in practical policymaking, which should aim at the ethical but doesn’t always hit that mark.  If expert judgments in ethics diverge too much from public attitudes—asking more from a population than what they’re willing to agree to—that’s a problem for implementing the policy, and a resolution is needed.

The info is here.

Wednesday, November 28, 2018

Why good businesspeople do bad things

Joseph Holt
The Chicago Tribune
Originally posted October 30, 2018

Here is an excerpt:

Businesspeople are also more likely to engage in bad behavior if they assume that their competitors are doing so and that they will be at a competitive disadvantage if they do not.

A 2006 study showed that MBA students in the U.S. and Canada were more likely to cheat than other graduate students. One of the authors of the study, Donald McCabe, explained in an article that the cheating was a result of MBA students’ “succeed-at-all-costs mentality” and the belief that they needed to act that way to succeed in the corporate world.

Casey Donnelly, Gatto’s attorney, claimed in her opening statement at the trial that “every major apparel company” engaged in the same payment practice, and that her client was simply attempting to “level the playing field.”

Federal authorities, engaged in a yearslong investigation of shadowy dealings involving shoe companies, sports agents, college coaches and top high school basketball players, have reportedly looked into Nike and Under Armour as well as Adidas.

Time will tell whether those companies were involved in similar payment schemes.

The info is here.