Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Wednesday, August 15, 2018

Four Rules for Learning How to Talk To Each Other Again

Jason Pontin
www.wired.com
Originally posted

Here is an excerpt:

Here’s how to speak in a polity where we loathe each other. Let this be the Law of Parsimonious Claims:

1. Say nothing you know to be untrue, whether to deceive, confuse, or, worst of all, encourage a wearied cynicism.

2. Make mostly falsifiable assertions or offer prescriptions whose outcomes could be measured, always explaining how your assertion or prescription could be tested.

3. Whereof you have no evidence but possess only moral intuitions, say so candidly, and accept you must coexist with people who have different intuitions.

4. When evidence proves you wrong, admit it cheerfully, pleased that your mistake has contributed to the general progress.

Finally, as you listen, assume the good faith of your opponents, unless you have proof otherwise. Judge their assertions and prescriptions based on the plain meaning of their words, rather than on what you guess to be their motives. Often, people will tell you about experiences they found significant. If they are earnest, hear them sympathetically.

The info is here.

Thinking about Karma and God reduces believers’ selfishness in anonymous dictator games

Cindel White, John Kelly, Azim Shariff, and Ara Norenzayan
Preprint
Originally posted on June 23, 2018

Abstract

In a novel supernatural framing paradigm, three repeated-measures experiments (N = 2347) examined whether thinking about Karma and God increases generosity in anonymous dictator games. We found that (1) thinking about Karma increased generosity in karmic believers across religious affiliations, including Hindus, Buddhists, Christians, and non-religious Americans; (2) thinking about God also increased generosity among believers in God (but not among non-believers), replicating previous findings; and (3) thinking about both Karma and God shifted participants’ initially selfish offers towards fairness, but had no effect on already fair offers. Contrary to hypotheses, ratings of supernatural punitiveness did not predict greater generosity. These supernatural framing effects were obtained and replicated in high-powered, pre-registered experiments and remained robust to several methodological checks, including hypothesis guessing, game familiarity, demographic variables, and variation in data exclusion criteria.

Tuesday, August 14, 2018

Natural-born existentialists

Ronnie de Sousa
aeon.com
Originally posted December 10, 2017

Here are two excerpts:

Much the same might be true of some of the emotional dispositions bequeathed to us by natural selection. If we follow some evolutionary psychologists in thinking that evolution has programmed us to value solidarity and authority, for example, we must recognise that those very same mechanisms promote xenophobia, racism and fascism. Some philosophers have made much of the fact that we appear to have genuinely altruistic motives: sometimes, human beings actually sacrifice themselves for complete strangers. If that is indeed a native human trait, so much the better. But it can’t be good because it’s natural. For selfishness and cruelty are no less natural. Again, naturalness can’t reasonably be why we value what we care about.

A second reason why evolution is not providence is that any given heritable trait is not simply either ‘adaptive’ or ‘maladaptive’ for the species. Some cases of fitness are frequency-dependent, which means that certain traits acquire a stable distribution in a population only if they are not universal.
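Frequency-dependent fitness is often illustrated with the classic hawk-dove game, in which aggressive "hawks" prosper only while they are rare. Below is a minimal sketch of discrete replicator dynamics for that game; the payoff values (V = 2, C = 4) and the baseline fitness are illustrative assumptions, not taken from the article:

```python
def replicator_hawk_dove(p=0.1, V=2.0, C=4.0, baseline=3.0, steps=500):
    """Iterate discrete replicator dynamics for the hawk-dove game.

    p is the initial fraction of hawks; V is the value of the contested
    resource, C the cost of fighting, and baseline keeps fitnesses positive.
    """
    for _ in range(steps):
        w_hawk = p * (V - C) / 2 + (1 - p) * V   # hawk's expected payoff
        w_dove = (1 - p) * V / 2                 # dove's expected payoff
        w_mean = p * w_hawk + (1 - p) * w_dove   # population average
        p = p * (baseline + w_hawk) / (baseline + w_mean)
    return p

# The population settles at the mixed equilibrium p* = V / C = 0.5:
# neither all-hawk nor all-dove is stable, so the trait never becomes
# universal -- exactly the stable non-universal distribution described above.
print(round(replicator_hawk_dove(), 3))
```

Starting from any interior frequency, high or low, the dynamics converge to the same mixed equilibrium, which is what makes the trait's fitness frequency-dependent rather than simply "adaptive" or "maladaptive."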

(cut)

The third reason we should not equate the natural with the good is the most important. Evolution is not about us. In repeating the well-worn phrase that is supposed to sum up natural selection, ‘survival of the fittest’, we seldom think to ask: the fittest what? It won’t do to think that the phrase refers to fitness in individuals such as you and me. Even the fittest individuals never survive at all. We all die. What does survive is best described as information, much of which is encoded in the genes. That remains true despite the fashionable preoccupation with ‘epigenetic’ or otherwise non-DNA-encoded factors. The point is that ‘the fittest’ refers to just whatever gets replicated in subsequent generations – and whatever that is, it isn’t us. Every human is radically new, and – at least until cloning becomes routine – none will ever recur.

The article is here.

The developmental and cultural psychology of free will

Tamar Kushnir
Philosophy Compass
Originally published July 12, 2018

Abstract

This paper provides an account of the developmental origins of our belief in free will based on research from a range of ages—infants, preschoolers, older children, and adults—and across cultures. The foundations of free will beliefs are in infants' understanding of intentional action—their ability to use context to infer when agents are free to “do otherwise” and when they are constrained. In early childhood, new knowledge about causes of action leads to new abilities to imagine constraints on action. Moreover, unlike adults, young children tend to view psychological causes (i.e., desires) and social causes (i.e., following rules or group norms, being kind or fair) of action as constraints on free will. But these beliefs change, and also diverge across cultures, corresponding to differences between Eastern and Western philosophies of mind, self, and action. Finally, new evidence shows developmentally early, culturally dependent links between free will beliefs and behavior, in particular when choice‐making requires self‐control.

Here is part of the Conclusion:

I've argued here that free will beliefs are early‐developing and culturally universal, and that the folk psychology of free will involves considering actions in the context of alternative possibilities and constraints on possibility. There are developmental differences in how children reason about the possibility of acting against desires, and there are both developmental and cultural differences in how children consider the social and moral limitations on possibility.  Finally, there is new evidence emerging for developmentally early, culturally moderated links between free will beliefs and willpower, delay of gratification, and self‐regulation.

The article is here.

Monday, August 13, 2018

This AI Just Beat Human Doctors On A Clinical Exam

Parmy Olson
Forbes.com
Originally posted June 28, 2018

Here is an excerpt:

Now Parsa is bringing his software service and virtual doctor network to insurers in the U.S. His pitch is that the smarter and more “reassuring” his AI-powered chatbot gets, the more likely patients across the Atlantic are to resolve their issues with software alone.

It’s a model that could save providers millions, potentially, but Parsa has yet to secure a big-name American customer.

“The American market is much more tuned to the economics of healthcare,” he said from his office. “We’re talking to everyone: insurers, employers, health systems. They have massive gaps in delivery of the care.”

“We will set up physical and virtual clinics, and AI services in the United States,” he said, adding that Babylon would be operational with U.S. clinics in 2019, starting state by state. “For a fixed fee, we take total responsibility for the cost of primary care.”

Parsa isn’t shy about his transatlantic ambitions: “I think the U.S. will be our biggest market shortly,” he adds.

The info is here.

We Need Data Ethics, Not Just Laws

Chad Wollen
www.adexchanger.com
Originally posted July 18, 2018

With consumer trust in brands at such a low point, it is worth reflecting on a home truth: Companies neglected to put the producers of “big data” – consumers – at the heart of their approach to data.

The General Data Protection Regulation (GDPR), the latest step toward building a more transparent relationship between companies and the public, represents a boiling over of concerns about data use.

Far too often, the general attitude appears to be one of companies gaming the rules, looking for ways to observe them without fully embracing their spirit. The next, and arguably bigger, step is the ethical one, because it is about companies acting on their own to do the right thing, not because they are ordered to.

What is clearly needed is a set of ethics that steer what companies can do with data – a set of truths that the industry holds to be self-evident.

In April, the industry was given a good starting point with the World Federation of Advertisers’ call for companies to adhere to a four-point plan to improve data policies. If meeting consumer needs is not enough to drive change then maybe meeting the needs of advertisers will be, and data safety initiatives can follow other moves within the industry to address issues around brand safety and accountability.

The info is here.

Sunday, August 12, 2018

Evolutionary Origins of Morality: Insights From Non-human Primates

Judith Burkart, Rahel Brugger, and Carel van Schaik
Front. Sociol., 09 July 2018

The aim of this contribution is to explore the origins of moral behavior and its underlying moral preferences and intuitions from an evolutionary perspective. Such a perspective encompasses both the ultimate, adaptive function of morality in our own species, as well as the phylogenetic distribution of morality and its key elements across primates. First, with regard to the ultimate function, we argue that human moral preferences are best construed as adaptations to the affordances of the fundamentally interdependent hunter-gatherer lifestyle of our hominin ancestors. Second, with regard to the phylogenetic origin, we show that even though full-blown human morality is unique to humans, several of its key elements are not. Furthermore, a review of evidence from non-human primates regarding prosocial concern, conformity, and the potential presence of universal, biologically anchored and arbitrary cultural norms shows that these elements of morality are not distributed evenly across primate species. This suggests that they have evolved along separate evolutionary trajectories. In particular, the element of prosocial concern most likely evolved in the context of shared infant care, which can be found in humans and some New World monkeys. Strikingly, many if not all of the elements of morality found in non-human primates are only evident in individualistic or dyadic contexts, but not as third-party reactions by truly uninvolved bystanders. We discuss several potential explanations for the unique presence of a systematic third-party perspective in humans, but focus particularly on mentalizing ability and language. Whereas both play an important role in present day, full-blown human morality, it appears unlikely that they played a causal role for the original emergence of morality. 
Rather, we suggest that the most plausible scenario to date is that human morality emerged because our hominid ancestors, equipped on the one hand with large and powerful brains inherited from their ape-like ancestor, and on the other hand with strong prosocial concern as a result of cooperative breeding, could evolve into an ever more interdependent social niche.

The article is here.

Saturday, August 11, 2018

Should we care that the sex robots are coming?

Kate Devlin
unherd.com
Originally published July 12, 2018

Here is an excerpt:

There’s no evidence to suggest that human-human relationships will be damaged. Indeed, it may be a chance for people to experience feelings of love that they are otherwise denied, for any number of reasons. Whether or not that love is considered valid by society is a different matter. And while objectification is definitely an issue, it may be an avoidable one. Security and privacy breaches are a worry in any smart technologies, which puts a whole new spin on safe sex.

As for child sex robots – an abhorrent image – people have already been convicted for importing child-like sex dolls. But we shouldn’t shy from considering whether research might deem them useful in a clinical setting, such as testing rehabilitation success, as has been trialled with virtual reality.

While non-sexual care robots are already in use, it was only three months ago that the race to produce the first commercially-available model was won by a lifeless sex doll with an animatronic head and an integrated AI chatbot called Harmony. She might look the part but she doesn’t move from the neck down. We are still a long way from Westworld.

Naturally, a niche market will be delighted at the prospect of bespoke robot pleasure to come. But many others are worried about the impact these machines will have on our own, human relationships. These concerns aren’t dispelled by the fact that the current form of the sex robot is a reductive, cartoonish stereotype of a woman: all big hair and bigger breasts.

The info is here.

Friday, August 10, 2018

Is compassion fatigue inevitable in an age of 24-hour news?

Elisa Gabbert
The Guardian
Originally posted August 2, 2018

Here is an excerpt:

Not long after compassion fatigue emerged as a concept in healthcare, a similar concept began to appear in media studies – the idea that overexposure to horrific images, from news reports in particular, could cause viewers to shut down emotionally, rejecting information instead of responding to it. In her 1999 book Compassion Fatigue: How the Media Sell Disease, Famine, War and Death, the journalist and scholar Susan Moeller explored this idea at length. “It seems as if the media careen from one trauma to another, in a breathless tour of poverty, disease and death,” she wrote. “The troubles blur. Crises become one crisis.” The volume of bad news drives the public to “collapse into a compassion fatigue stupor”.

Susan Sontag grappled with similar questions in her short book Regarding the Pain of Others, published in 2003. By “regarding” she meant not just “with regard to”, but looking at: “Flooded with images of the sort that once used to shock and arouse indignation, we are losing our capacity to react. Compassion, stretched to its limits, is going numb. So runs the familiar diagnosis.” She implies that the idea was already tired: media overload dulls our sensitivity to suffering. Whose fault is that – ours or the media’s? And what are we supposed to do about it?

By Moeller’s account, compassion fatigue is a vicious cycle. When war and famine are constant, they become boring – we’ve seen it all before. The only way to break through your audience’s boredom is to make each disaster feel worse than the last. When it comes to world news, the events must be “more dramatic and violent” to compete with more local stories, as a 1995 study of international media coverage by the Pew Research Center in Washington found.

The information is here.

SAS officers given lessons in ‘morality’

Paul Maley
The Australian
Originally posted July 9, 2018

SAS officers are being given additional training in ethics, morality and courage in leadership as the army braces itself for a potentially damning report expected to find that a small number of troops may have committed war crimes during the decade-long fight in Afghanistan.

With the Inspector-General of the Australian Defence Force due within months to hand down his report into alleged battlefield atrocities committed by Diggers, The Australian can reveal that the SAS Regiment has been quietly instituting a series of reforms ahead of the findings.

The changes to special forces training reflect a widely held view within the army that any alleged misconduct committed by Australian troops was in part the result of a failure of leadership, as well as the transgression of individual soldiers.

Many of the reforms are focused on strengthening operational leadership and regimental culture, while others are designed to help special operations officers make ethical decisions even under the most challenging conditions.

Thursday, August 9, 2018

The influence of moral stories on kindergarteners’ sharing behaviour

Zhuojun Yao and Robert Enright
Early Child Development and Care
July 19, 2018

Abstract

The current study investigated the effect of moral stories in promoting kindergarteners’ sharing behaviour. One hundred eight children were randomly assigned to one of three conditions: two experimental conditions (a moral story with a sharing model and good consequences and a moral story with a selfish model and bad consequences) and a control condition (a nonmoral story). The results showed that children in the experimental groups shared more than children in the control group. In addition, comparing the two experimental groups, children in the sharing-good consequences condition shared more than children in the selfish-bad consequences condition. Further, interviews were conducted to provide in-depth understanding of common and different influences of the two moral stories on children’s sharing behaviour. The implications for research and practice were discussed.

The article is here.

Why is suicide on the rise in the US – but falling in most of Europe?

Steven Stack
The Conversation
Originally published June 28, 2018

Here is an excerpt:

There is evidence that rising suicide rates are associated with a weakening of the social norms regarding mutual aid and support.

In one study on suicide in the U.S., the rising rates were closely linked with reductions in social welfare spending between 1960 and 1995. Social welfare expenditures include Medicaid, a medical assistance program for low income persons; Temporary Assistance for Needy Families, which replaced Aid to Families with Dependent Children; the Supplemental Security Income program for the blind, disabled and elderly; children’s services including adoption, foster care and day care; shelters; and funding of public hospitals for medical assistance other than Medicaid.

Later studies found a similar relationship between suicide and social welfare for the U.S. in the 1980s and between 1990 and 2000, as well as for nations in the Organization for Economic Cooperation and Development (OECD).

When it comes to spending on social welfare, the U.S. is at the low end of the spectrum relative to Western Europe. For example, only 18.8 percent of the U.S. GDP is spent on social welfare, while most of the OECD nations spend at least 25 percent of their GDP. Our rates of suicide are increasing while their rates fall.

The information is here.

Wednesday, August 8, 2018

Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates

Marshall Allen
ProPublica.org
Originally posted July 17, 2018

Here are two excerpts:

With little public scrutiny, the health insurance industry has joined forces with data brokers to vacuum up personal details about hundreds of millions of Americans, including, odds are, many readers of this story. The companies are tracking your race, education level, TV habits, marital status, net worth. They’re collecting what you post on social media, whether you’re behind on your bills, what you order online. Then they feed this information into complicated computer algorithms that spit out predictions about how much your health care could cost them.

(cut)

At a time when every week brings a new privacy scandal and worries abound about the misuse of personal information, patient advocates and privacy scholars say the insurance industry’s data gathering runs counter to its touted, and federally required, allegiance to patients’ medical privacy. The Health Insurance Portability and Accountability Act, or HIPAA, only protects medical information.

“We have a health privacy machine that’s in crisis,” said Frank Pasquale, a professor at the University of Maryland Carey School of Law who specializes in issues related to machine learning and algorithms. “We have a law that only covers one source of health information. They are rapidly developing another source.”

The information is here.

The Road to Pseudoscientific Thinking

Julia Shaw
Scientific American
Originally published January 16, 2017

Here is the conclusion:

So, where to from here? Are there any cool, futuristic applications of such insights? According to McColeman, “I expect that category learning work from human learning will help computer vision moving forward, as we understand the regularities in the environment that people are picking up on. There’s still a lot of room for improvement in getting computer systems to notice the same things that people notice.” We need to help people, and computers, to avoid being distracted by unimportant, attention-grabbing information.

The take-home message from this line of research seems to be: When fighting the post-truth war against pseudoscience and misinformation, make sure that important information is eye-catching and quickly understandable.

The information is here.

Tuesday, August 7, 2018

Thousands of leading AI researchers sign pledge against killer robots

Ian Sample
The Guardian
Originally posted July 18, 2018

Here is an excerpt:

The military is one of the largest funders and adopters of AI technology. With advanced computer systems, robots can fly missions over hostile terrain, navigate on the ground, and patrol under seas. More sophisticated weapon systems are in the pipeline. On Monday, the defence secretary Gavin Williamson unveiled a £2bn plan for a new RAF fighter, the Tempest, which will be able to fly without a pilot.

UK ministers have stated that Britain is not developing lethal autonomous weapons systems and that its forces will always have oversight and control of the weapons it deploys. But the campaigners warn that rapid advances in AI and other fields mean it is now feasible to build sophisticated weapons that can identify, track and fire on human targets without consent from a human controller. For many researchers, giving machines the decision over who lives and dies crosses a moral line.

“We need to make it the international norm that autonomous weapons are not acceptable. A human must always be in the loop,” said Toby Walsh, a professor of AI at the University of New South Wales in Sydney who signed the pledge.

The info is here.

Google’s AI ethics won't curb war by algorithm

Phoebe Braithwaite
Wired.com
Originally published July 5, 2018

Here is an excerpt:

One of these programmes is Project Maven, which trains artificial intelligence systems to parse footage from surveillance drones in order to “extract objects from massive amounts of moving or still imagery,” writes Drew Cukor, chief of the Algorithmic Warfare Cross-Functional Team. The programme is a key element of the US army’s efforts to select targets. One of the companies working on Maven is Google. Engineers at Google have protested their company’s involvement; their peers at companies like Amazon and Microsoft have made similar complaints, calling on their employers not to support the development of the facial recognition tool Rekognition, for use by the military, police and immigration control. For technology companies, this raises a question: should they play a role in governments’ use of force?

The US government’s policy of using armed drones to hunt its enemies abroad has long been controversial. Gibson argues that the CIA and US military are using drones to strike “far from the hot battlefield, against communities that aren't involved in an armed conflict, based on intelligence that is quite frequently wrong”. Paul Scharre, director of the technology and national security programme at the Center for a New American Security and author of Army of None, says that the use of drones and computing power is making the US military a much more effective and efficient force that kills far fewer civilians than in previous wars. “We actually need tech companies like Google helping the military to do many other things,” he says.

The article is here.

Monday, August 6, 2018

Why Should We Be Good?

Matt McManus
Quillette.com
Originally posted July 7, 2018

Here are two excerpts:

The negative motivation arises from moral dogmatism. There are those who wish to dogmatically assert their own values without worrying that they may not be as universal as one might suppose. For instance, this is often the case with religious fundamentalists who worry that secular society is increasingly unmoored from proper values and traditions. Ironically, the dark underside of this moral dogmatism is often a relativistic epistemology. Ethical dogmatists do not want to be confronted with the possibility that it is possible to challenge their values because they often cannot provide good reasons to back them up.

(cut)

These issues are all of considerable philosophical interest. In what follows, I want to press on just one issue that is often missed in debates between those who believe there are universal values, and those who believe that what is ethically correct is relative to either a culture or to the subjective preference of individuals. The issue I wish to explore is this: even if we know which values are universal, why should we feel compelled to adhere to them? Put more simply, even if we know what it is to be good, why should we bother to be good? This is one of the major questions addressed by what is often called meta-ethics.

The information is here.

False Equivalence: Are Liberals and Conservatives in the U.S. Equally “Biased”?

Jonathan Baron and John T. Jost
Invited Revision, Perspectives on Psychological Science.

Abstract

On the basis of a meta-analysis of 51 studies, Ditto, Liu, Clark, Wojcik, Chen, et al. (2018) conclude that ideological “bias” is equivalent on the left and right of U.S. politics. In this commentary, we contend that this conclusion does not follow from the review and that Ditto and colleagues are too quick to embrace a false equivalence between the liberal left and the conservative right. For one thing, the issues, procedures, and materials used in studies reviewed by Ditto and colleagues were selected for purposes other than the inspection of ideological asymmetries. Consequently, methodological choices made by researchers were systematically biased to avoid producing differences between liberals and conservatives. We also consider the broader implications of a normative analysis of judgment and decision-making and demonstrate that the “bias” examined by Ditto and colleagues is not, in fact, an irrational bias, and that it is incoherent to discuss bias in the absence of standards for assessing accuracy and consistency. We find that Jost’s (2017) conclusions about domain-general asymmetries in motivated social cognition, which suggest that epistemic virtues are more prevalent among liberals than conservatives, are closer to the truth of the matter when it comes to current American politics. Finally, we question the notion that the research literature in psychology is necessarily characterized by “liberal bias,” as several authors have claimed.

Here is the end:

If academics are disproportionately liberal—in comparison with society at large—it just might be due to the fact that being liberal in the early 21st century is more compatible with the epistemic standards, values, and practices of academia than is being conservative.

The article is here.

See “Your Surgeon Is Probably a Republican, Your Psychiatrist Probably a Democrat” as another example.

Sunday, August 5, 2018

How Do Expectations Shape Perception?

Floris P. de Lange, Micha Heilbron, & Peter Kok
Trends in Cognitive Sciences
Available online 29 June 2018

Abstract

Perception and perceptual decision-making are strongly facilitated by prior knowledge about the probabilistic structure of the world. While the computational benefits of using prior expectation in perception are clear, there are myriad ways in which this computation can be realized. We review here recent advances in our understanding of the neural sources and targets of expectations in perception. Furthermore, we discuss Bayesian theories of perception that prescribe how an agent should integrate prior knowledge and sensory information, and investigate how current and future empirical data can inform and constrain computational frameworks that implement such probabilistic integration in perception.

Highlights

  • Expectations play a strong role in determining the way we perceive the world.
  • Prior expectations can originate from multiple sources of information, and correspondingly have different neural sources, depending on where in the brain the relevant prior knowledge is stored.
  • Recent findings from both human neuroimaging and animal electrophysiology have revealed that prior expectations can modulate sensory processing at both early and late stages, and both before and after stimulus onset. The response modulation can take the form of either dampening the sensory representation or enhancing it via a process of sharpening.
  • Theoretical computational frameworks of neural sensory processing aim to explain how the probabilistic integration of prior expectations and sensory inputs results in perception.
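The probabilistic integration the abstract refers to is, in the simplest Gaussian case, commonly formalized as precision-weighted averaging of prior and sensory evidence. A minimal sketch of that standard computation (the numbers are illustrative, not taken from the paper):

```python
def integrate_gaussian(mu_prior, var_prior, mu_sensory, var_sensory):
    """Combine a Gaussian prior with a Gaussian sensory likelihood.

    Each source is weighted by its precision (inverse variance), so the
    more reliable source pulls the resulting percept harder toward its mean.
    """
    precision_prior = 1.0 / var_prior
    precision_sensory = 1.0 / var_sensory
    precision_post = precision_prior + precision_sensory
    mu_post = (precision_prior * mu_prior
               + precision_sensory * mu_sensory) / precision_post
    return mu_post, 1.0 / precision_post

# Equally reliable sources: the percept lands halfway between them,
# and the combined estimate is more precise than either source alone.
print(integrate_gaussian(0.0, 1.0, 2.0, 1.0))  # (1.0, 0.5)
```

Making the sensory input noisier (larger `var_sensory`) drags the percept toward the prior, which is one way of capturing how strong expectations can dominate weak sensory evidence.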

Saturday, August 4, 2018

Sacrificial utilitarian judgments do reflect concern for the greater good: Clarification via process dissociation and the judgments of philosophers

Paul Conway, Jacob Goldstein-Greenwood, David Polacek, & Joshua D. Greene
Cognition
Volume 179, October 2018, Pages 241–265

Abstract

Researchers have used “sacrificial” trolley-type dilemmas (where harmful actions promote the greater good) to model competing influences on moral judgment: affective reactions to causing harm that motivate characteristically deontological judgments (“the ends don’t justify the means”) and deliberate cost-benefit reasoning that motivates characteristically utilitarian judgments (“better to save more lives”). Recently, Kahane, Everett, Earp, Farias, and Savulescu (2015) argued that sacrificial judgments reflect antisociality rather than “genuine utilitarianism,” but this work employs a different definition of “utilitarian judgment.” We introduce a five-level taxonomy of “utilitarian judgment” and clarify our longstanding usage, according to which judgments are “utilitarian” simply because they favor the greater good, regardless of judges’ motivations or philosophical commitments. Moreover, we present seven studies revisiting Kahane and colleagues’ empirical claims. Studies 1a–1b demonstrate that dilemma judgments indeed relate to utilitarian philosophy, as philosophers identifying as utilitarian/consequentialist were especially likely to endorse utilitarian sacrifices. Studies 2–6 replicate, clarify, and extend Kahane and colleagues’ findings using process dissociation to independently assess deontological and utilitarian response tendencies in lay people. Using conventional analyses that treat deontological and utilitarian responses as diametric opposites, we replicate many of Kahane and colleagues’ key findings. However, process dissociation reveals that antisociality predicts reduced deontological inclinations, not increased utilitarian inclinations. Critically, we provide evidence that lay people’s sacrificial utilitarian judgments also reflect moral concerns about minimizing harm. This work clarifies the conceptual and empirical links between moral philosophy and moral psychology and indicates that sacrificial utilitarian judgments reflect genuine moral concern, in both philosophers and ordinary people.
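Process dissociation, as used in this literature, derives separate utilitarian (U) and deontological (D) parameters from responses to "congruent" dilemmas (where harm does not serve the greater good, so both processes reject it) and "incongruent" dilemmas (where the two processes conflict). The sketch below is my reconstruction of that standard arithmetic, following the Conway & Gawronski approach, not code from the paper, and the input proportions are made up for illustration:

```python
def process_dissociation(p_reject_congruent, p_reject_incongruent):
    """Estimate utilitarian (U) and deontological (D) tendencies.

    Inputs are the proportions of "harm is unacceptable" responses to
    congruent and incongruent sacrificial dilemmas. Under the standard
    processing-tree model:
        P(reject | congruent)   = U + (1 - U) * D
        P(reject | incongruent) = (1 - U) * D
    so the two parameters can be solved for directly.
    """
    U = p_reject_congruent - p_reject_incongruent
    D = p_reject_incongruent / (1.0 - U)
    return U, D

# A respondent who rejects harm in 90% of congruent and 30% of
# incongruent dilemmas: U is about 0.6 and D about 0.75.
print(process_dissociation(0.9, 0.3))
```

Because U and D are estimated independently rather than as opposite ends of one scale, a trait like antisociality can show up as lowered D without inflating U, which is the pattern the abstract reports.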

The research is here.

Friday, August 3, 2018

Data Citizens: Why We All Care About Data Ethics

Caitlin McDonald
InfoQ.com
Originally posted July 4, 2018

Key Takeaways

  • Data citizens are impacted by the models, methods, and algorithms created by data scientists, but they have limited agency to affect the tools which are acting on them.
  • Data science ethics can draw on the conceptual frameworks in existing fields for guidance on how to approach ethical questions--specifically, in this case, civics.
  • Data scientists are also data citizens. They are acted upon by the tools of data science as well as building them. It is often where these roles collide that people have the best understanding of the importance of developing ethical systems.
  • One model for ensuring the rights of data citizens could be seeking the same level of transparency for ethical practices in data science that there are for lawyers and legislators.
  • As with other ethical movements before, like seeking greater environmental protection or fairer working conditions, implementing new rights and responsibilities at scale will take a great deal of lobbying and advocacy.



How AI is transforming the NHS

Ian Sample
The Guardian
Originally posted July 4, 2018

Here is an excerpt:

With artificial intelligence (AI), the painstaking task can be completed in minutes. For the past six months, Jena has used a Microsoft system called InnerEye to mark up scans automatically for prostate cancer patients. Men make up a third of the 2,500 cancer patients his department treats every year. When a scan is done, the images are anonymised, encrypted and sent to the InnerEye program. It outlines the prostate on each image, creates a 3D model, and sends the information back. For prostate cancer, the entire organ is irradiated.

The software learned how to mark up organs and tumours by training on scores of images from past patients that had been seen by experienced consultants. It already saves time for prostate cancer treatment. Brain tumours are next on the list.

Automating the process does more than save time. Because InnerEye trains on images marked up by leading experts, it should perform as well as a top consultant every time. The upshot is that treatment is delivered faster and more precisely. “We know that how well we do the contouring has an impact on the quality of the treatment,” Jena says. “The difference between good and less good treatment is how well we hit the tumour and how well we avoid the healthy tissues.”

The article is here.

Thursday, August 2, 2018

Europe’s biggest research fund cracks down on ‘ethics dumping’

Linda Nordling
Nature.com
Originally posted July 3, 2018

Ethics dumping — doing research deemed unethical in a scientist’s home country in a foreign setting with laxer ethical rules — will be rooted out in research funded by the European Union, officials announced last week.

Applications to the EU’s €80-billion (US$93-billion) Horizon 2020 research fund will face fresh levels of scrutiny to make sure that research practices deemed unethical in Europe are not exported to other parts of the world. Wolfgang Burtscher, the European Commission’s deputy director-general for research, made the announcement at the European Parliament in Brussels on 29 June.

Burtscher said that a new code of conduct developed to curb ethics dumping will soon be applied to all EU-funded research projects. That means applicants will be referred to the code when they submit their proposals, and ethics committees will use the document when considering grant applications.

The information is here.

Genocide hoax tests ethics of academic publishing

Reuben Rose-Redwood
The Conversation
Originally posted July 3, 2018

Here is an excerpt:

What exactly "merits exposure and debate" in scholarly journals? As the editor of a scholarly journal myself, I am a strong supporter of academic freedom. But journal editors also have a responsibility to uphold the highest standards of academic quality and the ethical integrity of scholarly publications.

When I looked into the pro-Third World Quarterly petition in more detail, I noticed that over a dozen signatories were themselves editors of scholarly journals. Did they truly believe that "any work—however controversial" should be published in their own journals in the name of academic freedom?

If they had no qualms with publishing a case for colonialism, would they likewise have no ethical concerns about publishing a work advocating a case for genocide?

The genocide hoax

In late October 2017, I sent a hoax proposal for a special issue on "The Costs and Benefits of Genocide: Towards a Balanced Debate" to 13 journal editors who had signed the petition supporting the publication of "The Case for Colonialism."

In it, I mimicked the colonialism article's argument by writing: "There is a longstanding orthodoxy that only emphasizes the negative dimensions of genocide and ethnic cleansing, ignoring the fact that there may also be benefits—however controversial—associated with these political practices, and that, in some cases, the benefits may even outweigh the costs."

As I awaited the journal editors' responses, I wondered whether such an outrageous proposal would garner any support from editors who claimed to support the publication of controversial works in scholarly journals.

The information is here.

Wednesday, August 1, 2018

65% of Americans believe they are above average in intelligence: Results of two nationally representative surveys

Patrick R. Heck, Daniel J. Simons, Christopher F. Chabris
PLoS One
Originally posted July 3, 2018

Abstract

Psychologists often note that most people think they are above average in intelligence. We sought robust, contemporary evidence for this “smarter than average” effect by asking Americans in two independent samples (total N = 2,821) whether they agreed with the statement, “I am more intelligent than the average person.” After weighting each sample to match the demographics of U.S. census data, we found that 65% of Americans believe they are smarter than average, with men more likely to agree than women. However, overconfident beliefs about one’s intelligence are not always unrealistic: more educated people were more likely to think their intelligence is above average. We suggest that a tendency to overrate one’s cognitive abilities may be a stable feature of human psychology.

The research is here.

Why our brains see the world as ‘us’ versus ‘them’

Leslie Henderson
The Conversation
Originally posted June 2018

Here is an excerpt:

As opposed to fear, distrust and anxiety, circuits of neurons in brain regions called the mesolimbic system are critical mediators of our sense of “reward.” These neurons control the release of the transmitter dopamine, which is associated with an enhanced sense of pleasure. The addictive nature of some drugs, as well as pathological gaming and gambling, are correlated with increased dopamine in mesolimbic circuits.

In addition to dopamine itself, neurochemicals such as oxytocin can significantly alter the sense of reward and pleasure, especially in relationship to social interactions, by modulating these mesolimbic circuits.

Methodological variations indicate further study is needed to fully understand the roles of these signaling pathways in people. That caveat acknowledged, there is much we can learn from the complex social interactions of other mammals.

The neural circuits that govern social behavior and reward arose early in vertebrate evolution and are present in birds, reptiles, bony fishes and amphibians, as well as mammals. So while there is not a lot of information on reward pathway activity in people during in-group versus out-group social situations, there are some tantalizing results from studies on other mammals.

The article is here.

Tuesday, July 31, 2018

Fostering Discussion When Teaching Abortion and Other Morally and Spiritually Charged Topics

Louise P. King and Alan Penzias
AMA Journal of Ethics. July 2018, Volume 20, Number 7: 637-642.

Abstract

Best practices for teaching morally and spiritually charged topics, such as abortion, to those early in their medical training are elusive at best, especially in our current political climate. Here we advocate that our duty as educators requires that we explore these topics in a supportive environment. In particular, we must model respectful discourse for our learners in these difficult areas.

How to Approach Difficult Conversations

When working with learners early in their medical training, educators can find that best practices for discussion of morally and spiritually charged topics are elusive. In this article, we address how to meaningfully discuss and explore students’ conscientious objection to participation in a particular procedure. In particular, we consider the following questions: When, if ever, is it justifiable to define a good outcome of such teaching as changing students’ minds about their health practice beliefs, and when, if ever, is it appropriate to illuminate the negative impacts their health practice beliefs can have on patients?

The information is here.

Toward an Ethics of AI Assistants: an Initial Framework

John Danaher
Philos. Technol.
Accepted May 22, 2018

Abstract

Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I will argue that the ethics of their use is complex. There are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.

The information is here.

Monday, July 30, 2018

Biases Make People Vulnerable to Misinformation Spread by Social Media

Giovanni Luca Ciampaglia & Filippo Menczer
Scientific American
Originally published June 21, 2018

Here is an excerpt:

The third group of biases arises directly from the algorithms used to determine what people see online. Both social media platforms and search engines employ them. These personalization technologies are designed to select only the most engaging and relevant content for each individual user. But in doing so, it may end up reinforcing the cognitive and social biases of users, thus making them even more vulnerable to manipulation.

For instance, the detailed advertising tools built into many social media platforms let disinformation campaigners exploit confirmation bias by tailoring messages to people who are already inclined to believe them.

Also, if a user often clicks on Facebook links from a particular news source, Facebook will tend to show that person more of that site’s content. This so-called “filter bubble” effect may isolate people from diverse perspectives, strengthening confirmation bias.
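
The feedback loop described above can be illustrated with a toy simulation (a minimal sketch of my own, not any platform's actual ranking system; the two "sources" and all parameters are hypothetical). The feed shows each source in proportion to past clicks, so a modest private preference is amplified into a heavily one-sided feed:

```python
import random

def simulate_feed(pref_strength=0.8, rounds=500, seed=42):
    """Toy model of engagement-driven personalization.

    Two sources, A and B. The user privately prefers A (clicks it
    with probability pref_strength when it is shown, and clicks B
    with probability 1 - pref_strength). The feed shows each source
    in proportion to past clicks, so every click on A makes A more
    likely to be shown again -- a self-reinforcing loop.
    """
    rng = random.Random(seed)
    clicks = {"A": 1, "B": 1}  # start with smoothed click counts
    for _ in range(rounds):
        total = clicks["A"] + clicks["B"]
        shown = "A" if rng.random() < clicks["A"] / total else "B"
        p_click = pref_strength if shown == "A" else 1 - pref_strength
        if rng.random() < p_click:
            clicks[shown] += 1
    return clicks["A"] / (clicks["A"] + clicks["B"])

# A mild initial preference snowballs into a strongly skewed feed.
share_a = simulate_feed()
```

Nothing in the loop asks whether source A is accurate or diverse; it only optimizes engagement, which is the crux of the "filter bubble" concern.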

The information is here.

Mental health practitioners’ reported barriers to prescription of exercise for mental health consumers

Kirsten Way, Lee Kannis-Dymand, Michele Lastella, Geoff P. Lovell
Mental Health and Physical Activity
Volume 14, March 2018, Pages 52-60

Abstract

Exercise is an effective evidenced-based intervention for a range of mental health conditions, however sparse research has investigated the exercise prescription behaviours of mental health practitioners as a collective, and the barriers faced in prescribing exercise for mental health. A self-report survey was completed online by 325 mental health practitioners to identify how often they prescribe exercise for various conditions and explore their perceived barriers to exercise prescription for mental health through thematic analysis. Over 70% of the sample reported prescribing exercise regularly for depression, stress, and anxiety; however infrequent rates of prescription were reported for conditions of schizophrenia, bipolar and related disorders, and substance-related disorders. Using thematic analysis 374 statements on mental health practitioners' perceived barriers to exercise prescription were grouped into 22 initial themes and then six higher-order themes. Reported barriers to exercise prescription mostly revolved around clients' practical barriers and perspectives (41.7%) and the practitioners' knowledge and perspectives (33.2%). Of these two main themes regarding perceived barriers to exercise prescription in mental health, a lack of training (14.7%) and the client's disinclination (12.6%) were initial themes which reoccurred considerably more often than others. General practitioners, mental health nurses, and mental health managers also frequently cited barriers related to a lack of organisational support and resources. Barriers to the prescription of exercise such as lack of training and client's disinclination need to be addressed in order to overcome challenges which restrict the prescription of exercise as a therapeutic intervention.

The research is here.

Sunday, July 29, 2018

White House Ethics Lawyer Finally Reaches His Breaking Point

Bess Levin
Vanity Fair
Originally posted July 26, 2018

Here is an excerpt:

Politico reports that Passantino, one of the top lawyers in the White House, has plans to quit the administration by the end of the summer, leaving “a huge hole in the White House’s legal operation.” Despite the blow his loss will represent, it’s unlikely anyone will be able to convince him to stay and take one for the team, given he’s been working in what Passantino allies see as an “impossible” job. To recap: Passantino’s primary charge—the president—has refused to follow precedent and release his tax returns, and has held onto his business assets while in office. His son Eric, who runs said business along with Don Jr., says he gives his dad quarterly financial updates. He’s got a hotel down the road from the White House where foreign governments regularly stay as a way to kiss the ring. Two of his top advisers—his daughter and son-in-law—earned at least $82 million in outside income last year while serving in government. His Cabinet secretaries regularly compete with each other for the title of Most Blatantly Corrupt Trump Official. And Passantino is supposed to be “the clean-up guy” for all of it, a close adviser to the White House joked to Politico, which they can do because they’re not the one with a gig that would make even the most hardened Washington veteran cry.

The info is here.

Saturday, July 28, 2018

Costs, needs, and integration efforts shape helping behavior toward refugees

Robert Böhm, Maik M. P. Theelen, Hannes Rusch, and Paul A. M. Van Lange
PNAS June 25, 2018. 201805601; published ahead of print June 25, 2018

Abstract

Recent political instabilities and conflicts around the world have drastically increased the number of people seeking refuge. The challenges associated with the large number of arriving refugees have revealed a deep divide among the citizens of host countries: one group welcomes refugees, whereas another rejects them. Our research aim is to identify factors that help us understand host citizens’ (un)willingness to help refugees. We devise an economic game that captures the basic structural properties of the refugee situation. We use it to investigate both economic and psychological determinants of citizens’ prosocial behavior toward refugees. In three controlled laboratory studies, we find that helping refugees becomes less likely when it is individually costly to the citizens. At the same time, helping becomes more likely with the refugees’ neediness: helping increases when it prevents a loss rather than generates a gain for the refugees. Moreover, particularly citizens with higher degrees of prosocial orientation are willing to provide help at a personal cost. When refugees have to exert a minimum level of effort to be eligible for support by the citizens, these mandatory “integration efforts” further increase prosocial citizens’ willingness to help. Our results underscore that economic factors play a key role in shaping individual refugee helping behavior but also show that psychological factors modulate how individuals respond to them. Moreover, our economic game is a useful complement to correlational survey measures and can be used for pretesting policy measures aimed at promoting prosocial behavior toward refugees.

The research is here.

Friday, July 27, 2018

Morality in the Machines

Erick Trickey
Harvard Law Bulletin
Originally posted June 26, 2018

Here is an excerpt:

In February, the Harvard and MIT researchers endorsed a revised approach in the Massachusetts House’s criminal justice bill, which calls for a bail commission to study risk-assessment tools. In late March, the House-Senate conference committee included the more cautious approach in its reconciled criminal justice bill, which passed both houses and was signed into law by Gov. Charlie Baker in April.

Meanwhile, Harvard and MIT scholars are going still deeper into the issue. Bavitz and a team of Berkman Klein researchers are developing a database of governments that use risk scores to help set bail. It will be searchable to see whether court cases have challenged a risk-score tool’s use, whether that tool is based on peer-reviewed scientific literature, and whether its formulas are public.

Many risk-score tools are created by private companies that keep their algorithms secret. That lack of transparency creates due-process concerns, says Bavitz. “Flash forward to a world where a judge says, ‘The computer tells me you’re a risk score of 5 out of 7.’ What does it mean? I don’t know. There’s no opportunity for me to lift up the hood of the algorithm.” Instead, he suggests governments could design their own risk-assessment algorithms and software, using staff or by collaborating with foundations or researchers.

Students in the ethics class agreed that risk-score programs shouldn’t be used in court if their formulas aren’t transparent, according to then HLS 3L Arjun Adusumilli. “When people’s liberty interests are at stake, we really expect a certain amount of input, feedback and appealability,” he says. “Even if the thing is statistically great, and makes good decisions, we want reasons.”

The information is here.

Informed Consent and the Role of the Treating Physician

Holly Fernandez Lynch, Steven Joffe, and Eric A. Feldman
Originally posted June 21, 2018
N Engl J Med 2018; 378:2433-2438
DOI: 10.1056/NEJMhle1800071

Here are a few excerpts:

In 2017, the Pennsylvania Supreme Court ruled that informed consent must be obtained directly by the treating physician. The authors discuss the potential implications of this ruling and argue that a team-based approach to consent is better for patients and physicians.

(cut)

Implications in Pennsylvania and Beyond

Shinal has already had a profound effect in Pennsylvania, where it represents a substantial departure from typical consent practice.  More than half the physicians who responded to a recent survey conducted by the Pennsylvania Medical Society (PAMED) reported a change in the informed-consent process in their work setting; of that group, the vast majority expressed discontent with the effect of the new approach on patient flow and the way patients are served.  Medical centers throughout the state have changed their consent policies, precluding nonphysicians from obtaining patient consent to the procedures specified in the MCARE Act and sometimes restricting the involvement of physician trainees.  Some Pennsylvania institutions have also applied the Shinal holding to research, in light of the reference in the MCARE Act to experimental products and uses, despite the clear policy of the Food and Drug Administration (FDA) allowing investigators to involve other staff in the consent process.

(cut)

Selected State Informed-Consent Laws.

Although the Shinal decision is not binding outside of Pennsylvania, cases bearing on critical ethical dimensions of consent have a history of influence beyond their own jurisdictions.

The information is here.

Thursday, July 26, 2018

Virtuous technology

Mustafa Suleyman
medium.com
Originally published June 26, 2018

Here is an excerpt:

There are at least three important asymmetries between the world of tech and the world itself. First, the asymmetry between people who develop technologies and the communities who use them. Salaries in Silicon Valley are twice the median wage for the rest of the US and the employee base is unrepresentative when it comes to gender, race, class and more. As we have seen in other fields, this risks a disconnect between the inner workings of organisations and the societies they seek to serve.

This is an urgent problem. Women and minority groups remain badly underrepresented, and leaders need to be proactive in breaking the mould. The recent spotlight on these issues has meant that more people are aware of the need for workplace cultures to change, but these underlying inequalities also make their way into our companies in more insidious ways. Technology is not value neutral — it reflects the biases of its creators — and must be built and shaped by diverse communities if we are to minimise the risk of unintended harms.

Second, there is an asymmetry of information regarding how technology actually works, and the impact that digital systems have on everyday life. Ethical outcomes in tech depend on far more than algorithms and data: they depend on the quality of societal debate and genuine accountability.

The information is here.

Number of Canadians choosing medically assisted death jumps 30%

Kathleen Harris
www.cbc.ca
Originally posted June 21, 2018

There were 1,523 medically assisted deaths in Canada in the last six-month reporting period — a nearly 30 per cent increase over the previous six months.

Cancer was the most common underlying medical condition in reported assisted death cases, cited in about 65 per cent of all medically assisted deaths, according to the report from Health Canada.

Using data from Statistics Canada, the report shows medically assisted deaths accounted for 1.07 per cent of all deaths in the country over those six months. That is consistent with reports from other countries that have assisted death regimes, where the figure ranges from 0.3 to 4 per cent.

The information is here.

Wednesday, July 25, 2018

Descartes was wrong: ‘a person is a person through other persons’

Abeba Birhane
aeon.com
Originally published April 7, 2017

Here is an excerpt:

So reality is not simply out there, waiting to be uncovered. ‘Truth is not born nor is it to be found inside the head of an individual person, it is born between people collectively searching for truth, in the process of their dialogic interaction,’ Bakhtin wrote in Problems of Dostoevsky’s Poetics (1929). Nothing simply is itself, outside the matrix of relationships in which it appears. Instead, being is an act or event that must happen in the space between the self and the world.

Accepting that others are vital to our self-perception is a corrective to the limitations of the Cartesian view. Consider two different models of child psychology. Jean Piaget’s theory of cognitive development conceives of individual growth in a Cartesian fashion, as the reorganisation of mental processes. The developing child is depicted as a lone learner – an inventive scientist, struggling independently to make sense of the world. By contrast, ‘dialogical’ theories, brought to life in experiments such as Lisa Freund’s ‘doll house study’ from 1990, emphasise interactions between the child and the adult who can provide ‘scaffolding’ for how she understands the world.

A grimmer example might be solitary confinement in prisons. The punishment was originally designed to encourage introspection: to turn the prisoner’s thoughts inward, to prompt her to reflect on her crimes, and to eventually help her return to society as a morally cleansed citizen. A perfect policy for the reform of Cartesian individuals.

The information is here.

Heuristics and Public Policy: Decision Making Under Bounded Rationality

Sanjit Dhami, Ali al-Nowaihi, and Cass Sunstein
SSRN.com
Posted June 20, 2018

Abstract

How do human beings make decisions when, as the evidence indicates, the assumptions of the Bayesian rationality approach in economics do not hold? Do human beings optimize, or can they? Several decades of research have shown that people possess a toolkit of heuristics to make decisions under certainty, risk, subjective uncertainty, and true uncertainty (or Knightian uncertainty). We outline recent advances in knowledge about the use of heuristics and departures from Bayesian rationality, with particular emphasis on growing formalization of those departures, which add necessary precision. We also explore the relationship between bounded rationality and libertarian paternalism, or nudges, and show that some recent objections, founded on psychological work on the usefulness of certain heuristics, are based on serious misunderstandings.

The article can be downloaded here.

Tuesday, July 24, 2018

Amazon, Google and Microsoft Employee AI Ethics Are Best Hope For Humanity

Paul Armstrong
Forbes.com
Originally posted June 26, 2018

Here is an excerpt:

Google recently lost the 'Don't be Evil' from its Code of Conduct documents but what were once guiding words now appear to be afterthoughts, and they aren't alone. From drone use to deals with the immigration services, large tech companies are looking to monetise their creations and who can blame them - projects can cost double digit millions as companies look to maintain an edge in a continually evolving marketplace. Employees are not without a conscience it seems, and as talent becomes the one thing that companies need in this war, that power needs to be wielded, or we risk runaway train scenarios. If you want an idea of where things could go read this.

China is using AI software and facial recognition to determine who can travel, using what and where. You might think this is a ways away from being used on US or UK soil, but you'd be wrong. London has cameras on pretty much all streets, and the US has Amazon's Rekognition (Orlando just abandoned its use, but other tests remain active). Employees need to be the conscience of large entities and not only the ACLU or civil liberties inclined. From racist AI to faked video using machine learning to create better fakes, how you form technology matters as much as the why. Google has already mastered the technology to convince a human it is not talking to a robot thanks to um's and ah's - Google's next job is to convince us that is a good thing.

The information is here.

Data ethics is more than just what we do with data, it’s also about who’s doing it

James Arvanitakis, Andrew Francis, and Oliver Obst
The Conversation
Originally posted June 21, 2018

If the recent Cambridge Analytica data scandal has taught us anything, it’s that the ethical cultures of our largest tech firms need tougher scrutiny.

But moral questions about what data should be collected and how it should be used are only the beginning. They raise broader questions about who gets to make those decisions in the first place.

We currently have a system in which power over the judicious and ethical use of data is overwhelmingly concentrated among white men. Research shows that the unconscious biases that emerge from a person’s upbringing and experiences can be baked into technology, resulting in negative consequences for minority groups.

(cut)

People noticed that Google Translate showed a tendency to assign feminine gender pronouns to certain jobs and masculine pronouns to others – “she is a babysitter” or “he is a doctor” – in a manner that reeked of sexism. Google Translate bases its decision about which gender to assign to a particular job on the training data it learns from. In this case, it’s picking up the gender bias that already exists in the world and feeding it back to us.

If we want to ensure that algorithms don’t perpetuate and reinforce existing biases, we need to be careful about the data we use to train algorithms. But if we hold the view that women are more likely to be babysitters and men are more likely to be doctors, then we might not even notice – and correct for – biased data in the tools we build.

So it matters who is writing the code because the code defines the algorithm, which makes the judgement on the basis of the data.
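
The mechanism is easy to see in miniature. The sketch below is a toy illustration of my own, not Google Translate's actual system: the tiny "corpus" and the majority-vote rule are hypothetical stand-ins for real training data and a real translation model, but they show how a statistical system reproduces whatever imbalance its training data contains.

```python
from collections import Counter

# Hypothetical miniature training corpus the "model" learns from.
corpus = [
    "she is a babysitter", "she is a babysitter", "she is a nurse",
    "he is a doctor", "he is a doctor", "he is an engineer",
    "she is a doctor",  # the minority pattern the majority vote will bury
]

def learned_pronoun(job, corpus):
    """Return the pronoun most often co-occurring with `job`,
    mimicking how a purely statistical translator resolves a
    gender-neutral pronoun: by majority vote over training data."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if job in words:
            counts[words[0]] += 1  # first word is the pronoun
    return counts.most_common(1)[0][0]

learned_pronoun("doctor", corpus)      # majority vote picks "he"
learned_pronoun("babysitter", corpus)  # majority vote picks "she"
```

The "she is a doctor" sentence is in the data, but the majority vote erases it. Noticing and correcting that kind of skew requires someone on the team who thinks to look for it, which is precisely why who writes the code matters.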

The information is here.

Monday, July 23, 2018

St. Cloud psychologist gets 3-plus years for sex with client

Nora G. Hertel
Saint Cloud Times 
Originally published June 14, 2018

Psychologist Eric Felsch will spend more than three years in prison for having sex with a patient in 2011.

Stearns County Judge Andrew Pearson sentenced Felsch Thursday to 41 months in prison for third-degree criminal sexual conduct, a felony. He pleaded guilty to the charge in April.

Felsch, 46, has a St. Cloud address.

It is against Minnesota law for a psychotherapist to have sex with a patient during or outside of a therapy session. A defendant facing that charge cannot defend himself by saying the victim consented to the sexual activity.

Sex with clients is also against ethical codes taught to psychologists.

The information is here.

A psychologist in Pennsylvania can face criminal charges for engaging in sexual relationships with a current patient.

Assessing the contextual stability of moral foundations: Evidence from a survey experiment

David Ciuk
Research and Politics
First Published June 20, 2018

Abstract

Moral foundations theory (MFT) claims that individuals use their intuitions on five “virtues” as guidelines for moral judgment, and recent research makes the case that these intuitions cause people to adopt important political attitudes, including partisanship and ideology. New work in political science, however, demonstrates not only that the causal effect of moral foundations on these political predispositions is weaker than once thought, but it also opens the door to the possibility that causality runs in the opposite direction—from political predispositions to moral foundations. In this manuscript, I build on this new work and test the extent to which partisan and ideological considerations cause individuals’ moral foundations to shift in predictable ways. The results show that while these group-based cues do exert some influence on moral foundations, the effects of outgroup cues are particularly strong. I conclude that small shifts in political context do cause MFT measures to move, and, to close, I discuss the need for continued theoretical development in MFT as well as an increased attention to measurement.

The research is here.

Sunday, July 22, 2018

Are free will believers nicer people? (Four studies suggest not)

Damien Crone and Neil Levy
Preprint
Created January 10, 2018

Abstract

Free will is widely considered a foundational component of Western moral and legal codes, and yet current conceptions of free will are widely thought to fit uncomfortably with much research in psychology and neuroscience. Recent research investigating the consequences of laypeople’s free will beliefs (FWBs) for everyday moral behavior suggest that stronger FWBs are associated with various desirable moral characteristics (e.g., greater helpfulness, less dishonesty). These findings have sparked concern regarding the potential for moral degeneration throughout society as science promotes a view of human behavior that is widely perceived to undermine the notion of free will. We report four studies (combined N = 921) originally concerned with possible mediators and/or moderators of the abovementioned associations. Unexpectedly, we found no association between FWBs and moral behavior. Our findings suggest that the FWB – moral behavior association (and accompanying concerns regarding decreases in FWBs causing moral degeneration) may be overstated.

The research is here.

Saturday, July 21, 2018

Bias detectives: the researchers striving to make algorithms fair

Rachel Courtland
Nature.com
Originally posted

Here is an excerpt:

“What concerns me most is the idea that we’re coming up with systems that are supposed to ameliorate problems [but] that might end up exacerbating them,” says Kate Crawford, co-founder of the AI Now Institute, a research centre at New York University that studies the social implications of artificial intelligence.

With Crawford and others waving red flags, governments are trying to make software more accountable. Last December, the New York City Council passed a bill to set up a task force that will recommend how to publicly share information about algorithms and investigate them for bias. This year, France’s president, Emmanuel Macron, has said that the country will make all algorithms used by its government open. And in guidance issued this month, the UK government called for those working with data in the public sector to be transparent and accountable. Europe’s General Data Protection Regulation (GDPR), which came into force at the end of May, is also expected to promote algorithmic accountability.

In the midst of such activity, scientists are confronting complex questions about what it means to make an algorithm fair. Researchers such as Vaithianathan, who work with public agencies to try to build responsible and effective software, must grapple with how automated tools might introduce bias or entrench existing inequity — especially if they are being inserted into an already discriminatory social system.

The information is here.

Friday, July 20, 2018

How to Look Away

Megan Garber
The Atlantic
Originally published June 20, 2018

Here is an excerpt:

It is a dynamic—the democratic alchemy that converts seeing things into changing them—that the president and his surrogates have been objecting to, as they have defended their policy. They have been, this week (with notable absences), busily appearing on cable-news shows and giving disembodied quotes to news outlets, insisting that things aren’t as bad as they seem: that the images and the audio and the evidence are wrong not merely ontologically, but also emotionally. Don’t be duped, they are telling Americans. Your horror is incorrect. The tragedy is false. Your outrage about it, therefore, is false. Because, actually, the truth is so much more complicated than your easy emotions will allow you to believe. Actually, as Fox News host Laura Ingraham insists, the holding pens that seem to house horrors are “essentially summer camps.” And actually, as Fox & Friends’ Steve Doocy instructs, the pens are not cages so much as “walls” that have merely been “built … out of chain-link fences.” And actually, Kirstjen Nielsen wants you to remember, “We provide food, medical, education, all needs that the child requests.” And actually, too—do not be fooled by your own empathy, Tom Cotton warns—think of the child-smuggling. And of MS-13. And of sexual assault. And of soccer fields. There are so many reasons to look away, so many other situations more deserving of your outrage and your horror.

It is a neat rhetorical trick: the logic of not in my backyard, invoked not merely despite the fact that it is happening in our backyard, but because of it. With seed and sod that we ourselves have planted.

Yes, yes, there are tiny hands, reaching out for people who are not there … but those are not the point, these arguments insist and assure. To focus on those images—instead of seeing the system, a term that Nielsen and even Trump, a man not typically inclined to think in networked terms, have been invoking this week—is to miss the larger point.

The article is here.

The Psychology of Offering an Apology: Understanding the Barriers to Apologizing and How to Overcome Them

Karina Schumann
Current Directions in Psychological Science 
Vol 27, Issue 2, pp. 74 - 78
First Published March 8, 2018

Abstract

After committing an offense, a transgressor faces an important decision regarding whether and how to apologize to the person who was harmed. The actions he or she chooses to take after committing an offense can have dramatic implications for the victim, the transgressor, and their relationship. Although high-quality apologies are extremely effective at promoting reconciliation, transgressors often choose to offer a perfunctory apology, withhold an apology, or respond defensively to the victim. Why might this be? In this article, I propose three major barriers to offering high-quality apologies: (a) low concern for the victim or relationship, (b) perceived threat to the transgressor’s self-image, and (c) perceived apology ineffectiveness. I review recent research examining how these barriers affect transgressors’ apology behavior and describe insights this emerging work provides for developing methods to move transgressors toward more reparative behavior. Finally, I discuss important directions for future research.

The article is here.

Thursday, July 19, 2018

Ethics Policies Don't Build Ethical Cultures

Dori Meinert
www.shrm.org
Originally posted June 19, 2018

Here is an excerpt:

Most people think they would never voluntarily commit an unethical or illegal act. But when Gallagher asked how many people in the audience had ever received a speeding ticket, numerous hands were raised. Similarly, employees rationalize their misuse of company supplies all the time, such as shopping online on their company-issued computer during work hours.

"It's easy to make unethical choices when they are socially acceptable," he said.

But those seemingly small choices can start people down a slippery slope.

Be on the Lookout for Triggers

No one plans to destroy their career by breaking the law or violating their company's ethics policy. There are usually personal stressors that push them over the edge, triggering a "fight or flight" response. At that point, they're not thinking rationally, Gallagher said.

Financial problems, relationship problems or health issues are the most common emotional stressors, he said.

"If you're going to be an ethical leader, are you paying attention to your employees' emotional triggers?"

The information is here.

The developmental origins of moral concern: An examination of moral boundary decision making throughout childhood

Neldner K, Crimston D, Wilks M, Redshaw J, Nielsen M (2018)
PLoS ONE 13(5): e0197819. https://doi.org/10.1371/journal.pone.0197819

Abstract
Prominent theorists have made the argument that modern humans express moral concern for a greater number of entities than at any other time in our past. Moreover, adults show stable patterns in the degrees of concern they afford certain entities over others, yet it remains unknown when and how these patterns of moral decision-making manifest in development. Children aged 4 to 10 years (N = 151) placed 24 pictures of human, animal, and environmental entities on a stratified circle representing three levels of moral concern. Although younger and older children expressed similar overall levels of moral concern, older children demonstrated a more graded understanding of concern by including more entities within the outer reaches of their moral circles (i.e., they were less likely to view moral inclusion as a simple in vs. out binary decision). With age, children extended greater concern to humans than other forms of life, and more concern to vulnerable groups, such as the sick and disabled. Notably, children’s level of concern for human entities predicted their prosocial behavior. The current research provides novel insights into the development of our moral reasoning and its structure within childhood.

The paper is here.

Wednesday, July 18, 2018

Can Employees Force A Company To Be More Ethical?

Enrique Dans
Forbes.com
Originally posted June 19, 2018

Here is the conclusion:

Whatever the outcome, it now seems increasingly clear that if you do not agree with your company’s practices, if they breach basic ethics, you should listen to your conscience and make your voice heard. Which is all fine and good in a rapidly expanding technology sector such as that of the United States, where you are likely to find another job quickly, but what about in other sectors, or in countries with higher unemployment rates or where government and industry are more closely aligned?

Can we and should we put a price on our principles? Is having a conscience the unique preserve of the wealthy and highly skilled? Obviously not, and it is good news that some employees at US companies are setting a precedent. If companies are not going to behave ethically of their own volition, at least we can count on their employees to embarrass them into doing so. Perhaps other countries and companies will follow suit…

The article is here.

Why are Americans so sad?

Monica H. Swahn
quartz.com
Originally published June 16, 2018

Suicide rates in the US have increased nearly 30% in less than 20 years, the Centers for Disease Control and Prevention reported June 7. These mind-numbing statistics were released the same week two very famous, successful and beloved people committed suicide—Kate Spade, a tremendous entrepreneur, trendsetter and fashion icon, and Anthony Bourdain, a distinguished chef and world traveler who took us on gastronomic journeys to all corners of the world through his TV shows.

Their tragic deaths, and others like them, have brought new awareness to the rapidly growing public health problem of suicide in the US. These deaths have renewed the country’s conversation about the scope of the problem. The sad truth is that suicide is the 10th leading cause of death among all Americans, and among youth and young adults, suicide is the third leading cause of death.

I believe it’s time for us to pause and to ask the question why? Why are the suicide rates increasing so fast? And, are the increasing suicide rates linked to the seeming increase in demand for drugs such as marijuana, opioids and psychiatric medicine? As a public health researcher and epidemiologist who has studied these issues for a long time, I think there may be deeper issues to explore.

Suicide: more than a mental health issue

Suicide prevention is usually focused on the individual and within the context of mental health illness, which is a very limited approach. Typically, suicide is described as an outcome of depression, anxiety, and other mental health concerns including substance use. And, these should not be trivialized; these conditions can be debilitating and life-threatening and should receive treatment. (If you or someone you know need help, call the National Suicide Prevention Lifeline at 1-800-273-8255).

The info is here.

Tuesday, July 17, 2018

Social observation increases deontological judgments in moral dilemmas

Minwoo Lee, Sunhae Sul, Hackjin Kim
Evolution and Human Behavior
Available online 18 June 2018

Abstract

A concern for positive reputation is one of the core motivations underlying various social behaviors in humans. The present study investigated how experimentally induced reputation concern modulates judgments in moral dilemmas. In a mixed-design experiment, participants were randomly assigned to the observed vs. the control group and responded to a series of trolley-type moral dilemmas either in the presence or absence of observers, respectively. While no significant baseline differences in personality traits or moral decision style were found across the two groups of participants, our analyses revealed that social observation promoted deontological judgments, especially for moral dilemmas involving direct physical harm (i.e., the personal moral dilemmas), yet with an overall decrease in decision confidence and a significant prolongation of reaction time. Moreover, participants in the observed group, but not in the control group, showed increased sensitivity to warmth vs. competence trait words in the lexical decision task performed after the moral dilemma task. Our findings suggest that reputation concern, once triggered by the presence of potentially judgmental others, could activate a culturally dominant norm of warmth in various social contexts. This could, in turn, induce a series of goal-directed processes for self-presentation of warmth, leading to increased deontological judgments in moral dilemmas. The results of the present study provide insights into the reputational consequences of moral decisions that merit further exploration.

The article is here.

The Rise of the Robots and the Crisis of Moral Patiency

John Danaher
Pre-publication version of AI and Society

Abstract

This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible moral agents, and thereby reduce them to moral patients. Since that ability and willingness is central to the value system in modern liberal democratic states, the crisis of moral patiency has a broad civilization-level significance: it threatens something that is foundational to and presupposed in much contemporary moral and political discourse. I defend this argument in three parts. I start with a brief analysis of an analogous argument made (or implied) in pop culture. Though those arguments turn out to be hyperbolic and satirical, they do prove instructive as they illustrate a way in which the rise of robots could impact upon civilization, even when the robots themselves are neither malicious nor powerful enough to bring about our doom. I then introduce the argument from the crisis of moral patiency, defend its main premises, and address objections.

The paper is here.

Monday, July 16, 2018

Moral fatigue: The effects of cognitive fatigue on moral reasoning

Shane Timmons and Ruth MJ Byrne
Quarterly Journal of Experimental Psychology
pp. 1–12

Abstract

We report two experiments that show a moral fatigue effect: participants who are fatigued after they have carried out a tiring cognitive task make different moral judgements compared to participants who are not fatigued. Fatigued participants tend to judge that a moral violation is less permissible even though it would have a beneficial effect, such as killing one person to save the lives of five others. The moral fatigue effect occurs when people make a judgement that focuses on the harmful action, killing one person, but not when they make a judgement that focuses on the beneficial outcome, saving the lives of others, as shown in Experiment 1 (n = 196). It also occurs for judgements about morally good actions, such as jumping onto railway tracks to save a person who has fallen there, as shown in Experiment 2 (n = 187). The results have implications for alternative explanations of moral reasoning.

The article is here.

Mind-body practices and the self: yoga and meditation do not quiet the ego, but instead boost self-enhancement

Gebauer, J., Nehrlich, A. D., Stahlberg, D., et al.
Psychological Science, 1–22 (in press)

Abstract

Mind-body practices enjoy immense public and scientific interest. Yoga and meditation are highly popular. Purportedly, they foster well-being by “quieting the ego” or, more specifically, curtailing self-enhancement. However, this ego-quieting effect contradicts an apparent psychological universal, the self-centrality principle. According to this principle, practicing any skill renders it self-central, and self-centrality breeds self-enhancement. We examined those opposing predictions in the first tests of mind-body practices’ self-enhancement effects. Experiment 1 followed 93 yoga students over 15 weeks, assessing self-centrality and self-enhancement after yoga practice (yoga condition, n = 246) and without practice (control condition, n = 231). Experiment 2 followed 162 meditators over 4 weeks (meditation condition: n = 246; control condition: n = 245). Self-enhancement was higher in the yoga (Experiment 1) and meditation (Experiment 2) conditions, and those effects were mediated by greater self-centrality. Additionally, greater self-enhancement mediated mind-body practices’ well-being benefits. Evidently, neither yoga nor meditation quiets the ego; instead, both boost self-enhancement.

The paper can be downloaded here.

Sunday, July 15, 2018

Should the police be allowed to use genetic information in public databases to track down criminals?

Bob Yirka
Phys.org
Originally posted June 8, 2018

Here is an excerpt:

The authors point out that there is no law forbidding what the police did—the genetic profiles came from people who willingly and of their own accord gave up their DNA data. But should there be? If you send a swab to Ancestry.com, for example, should the genetic profile they create be off-limits to anyone but you and them? It is doubtful that many who take such actions fully consider the ways in which their profile might be used. Most such companies routinely sell their data to pharmaceutical companies or others looking to use the data to make a profit, for example. Should they also be compelled to give up such data due to a court order? The authors suggest that if the public wants their DNA information to remain private, they need to contact their representatives and demand legislation that lays out specific rules for data housed in public databases.

The article is here.

Saturday, July 14, 2018

10 Ways to Avoid False Memories

Christopher Chabris and Daniel Simons
Slate.com
Originally posted February 10, 2018

Here is an excerpt:

No one has, to our knowledge, tried to implant a false memory of being shot down in a helicopter. But researchers have repeatedly created other kinds of entirely false memory in the laboratory. Most famously, Elizabeth Loftus and Jacqueline Pickrell successfully convinced people that, as children, they had once been lost in a shopping mall. In another study, researchers Kimberly Wade, Maryanne Garry, Don Read, and Stephen Lindsay showed people a Photoshopped image of themselves as children, standing in the basket of a hot air balloon. Half of the participants later had either complete or partial false memories, sometimes “remembering” additional details from this event—an event that they never experienced. In a newly published study, Julia Shaw and Stephen Porter used structured interviews to convince 70 percent of their college student participants that they had committed a crime as an adolescent (theft, assault, or assault with a weapon) and that the crime had resulted in police contact. And outside the laboratory, people have fabricated rich and detailed memories of things that we can be almost 100 percent certain did not happen, such as having been abducted and impregnated by aliens.

Even memories for highly emotional events—like the Challenger explosion or the 9/11 attacks—can mutate substantially. As time passes, we can lose the link between things we’ve experienced and the details surrounding them; we remember the gist of a story, but we might not recall whether we experienced the events or just heard about them from someone else. We all experience this failure of “source memory” in small ways: Maybe you tell a friend a great joke that you heard recently, only to learn that he’s the one who told it to you. Or you recall having slammed your hand in a car door as a child, only to get into an argument over whether it happened instead to your sister. People sometimes even tell false stories directly to the people who actually experienced the original events, something that is hard to explain as intentional lying. (Just last month, Brian Williams let his exaggerated war story be told at a public event honoring one of the soldiers who had been there.)

The information is here.

Friday, July 13, 2018

Rorschach (regarding AI)

Michael Solana
Medium.com
Originally posted June 7, 2018

Here is an excerpt:

Here we approach our inscrutable abstract, and our robot Rorschach test. But in this contemporary version of the famous psychological prompts, what we are observing is not even entirely ambiguous. We are attempting to imagine a greatly-amplified mind. Here, each of us has a particularly relevant data point — our own. In trying to imagine the amplified intelligence, it is natural to imagine our own intelligence amplified. In imagining the motivations of this amplified intelligence, we naturally imagine ourselves. If, as you try to conceive of a future with machine intelligence, a monster comes to mind, it is likely you aren’t afraid of something alien at all. You’re afraid of something exactly like you. What would you do with unlimited power?

Psychological projection seems to work in several contexts outside of general artificial intelligence. In the technology industry the concept of “meritocracy” is now hotly debated. How much of your life is determined by luck, and how much by choice? There’s no answer here we know for sure, but has there ever been a better Rorschach test for separating high-achievers from people who were given what they have? Questions pertaining to human nature are almost open self-reflection. Are we basically good, with some exceptions, or are humans basically beasts, with an animal nature just barely contained by a set of slowly-eroding stories we tell ourselves — law, faith, society? The inner workings of a mind can’t be fully shared, and they can’t be observed by a neutral party. We therefore do not — can not, currently — know anything of the inner workings of people in general. But we can know ourselves. So in the face of large abstractions concerning intelligence, we hold up a mirror.

Not everyone who fears general artificial intelligence would cause harm to others. There are many people who haven’t thought deeply about these questions at all. They look to their neighbors for cues on what to think, and there is no shortage of people willing to tell them. The media has ads to sell, after all, and historically they have found great success in doing this with horror stories. But as we try to understand the people who have thought about these questions with some depth — with the depth required of a thoughtful screenplay, for example, or a book, or a company — it’s worth considering the inkblot.

The article is here.