Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Thursday, January 17, 2019

Americans' trust in honesty, ethics of clergy hits all-time low in Gallup ranking of professions

Stoyan Zaimov
www.christianpost.com
Originally posted December 25, 2018

Americans' view of the honesty and ethics of clergy has fallen to an all-time low in a ranking of different professions released by Gallup.

The Gallup poll, conducted Dec. 3-12 among 1,025 U.S. adults, found that only 37 percent of respondents had a "very high" or "high" opinion of the honesty and ethical standards of clergy. Forty-three percent of people gave them an average rating, while 15 percent said they had a "low" or "very low" opinion, according to the poll, which was released on Dec. 21.

The margin of sampling error for the survey was identified as plus or minus 4 percentage points at the 95 percent confidence level.
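As a quick plausibility check, the reported margin can be approximated with the standard simple-random-sample formula, z·√(p(1−p)/n). The sketch below is illustrative only; it is not Gallup's actual methodology, which applies survey weighting and design effects.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a simple random sample.

    p=0.5 maximizes p*(1-p), giving the most conservative estimate.
    """
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1025)
print(f"+/- {moe * 100:.1f} percentage points")  # about +/- 3.1 points
```

The naive formula gives roughly ±3.1 points for n = 1,025; Gallup's reported ±4 points is larger because weighting inflates the effective variance.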

Gallup noted that the 37 percent "very high" or "high" score for clergy is the lowest since it began asking the question in 1977. The historical high of 67 percent occurred back in 1985, and the score has fallen below the overall average positive rating of 54 percent every year since 2009.

"The public's views of the honesty and ethics of the clergy continue to decline after the Catholic Church was rocked again this year by more abuse scandals," Gallup noted in its observations.

The info is here.

Neuroethics Guiding Principles for the NIH BRAIN Initiative

Henry T. Greely, Christine Grady, Khara M. Ramos, Winston Chiong and others
Journal of Neuroscience 12 December 2018, 38 (50) 10586-10588
DOI: https://doi.org/10.1523/JNEUROSCI.2077-18.2018

Introduction

Neuroscience presents important neuroethical considerations. Human neuroscience demands focused application of the core research ethics guidelines set out in documents such as the Belmont Report. Various mechanisms, including institutional review boards (IRBs), privacy rules, and the Food and Drug Administration, regulate many aspects of neuroscience research, and many articles, books, workshops, and conferences address neuroethics (Farah, 2010). However, responsible neuroscience research requires continual dialogue among neuroscience researchers, ethicists, philosophers, lawyers, and other stakeholders to help assess its ethical, legal, and societal implications. The Neuroethics Working Group of the National Institutes of Health (NIH) Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative, a group of experts providing neuroethics input to the NIH BRAIN Initiative Multi-Council Working Group, seeks to promote this dialogue by proposing the following Neuroethics Guiding Principles (Table 1).

Wednesday, January 16, 2019

What Is the Right to Privacy?

Andrei Marmor
(2015) Philosophy & Public Affairs, 43, 1, pp 3-26

The right to privacy is a curious kind of right. Most people think that we have a general right to privacy. But when you look at the kind of issues that lawyers and philosophers label as concerns about privacy, you see widely differing views about the scope of the right and the kind of cases that fall under its purview. Consequently, it has become difficult to articulate the underlying interest that the right to privacy is there to protect—so much so that some philosophers have come to doubt that there is any underlying interest protected by it. According to Judith Thomson, for example, privacy is a cluster of derivative rights, some of them derived from rights to own or use your property, others from the right to your person or your right to decide what to do with your body, and so on. Thomson’s position starts from a sound observation, and I will begin by explaining why. The conclusion I will reach, however, is very different. I will argue that there is a general right to privacy grounded in people’s interest in having a reasonable measure of control over the ways in which they can present themselves (and what is theirs) to others. I will strive to show that this underlying interest justifies the right to privacy and explains its proper scope, though the scope of the right might be narrower, and fuzzier in its boundaries, than is commonly understood.

The info is here.

Debate ethics of embryo models from stem cells

Nicolas Rivron, Martin Pera, Janet Rossant, Alfonso Martinez Arias, and others
Nature
Originally posted December 12, 2018

Here are some excerpts:

Four questions

Future progress depends on addressing now the ethical and policy issues that could arise.

Ultimately, individual jurisdictions will need to formulate their own policies and regulations, reflecting their values and priorities. However, we urge funding bodies, along with scientific and medical societies, to start an international discussion as a first step. Bioethicists, scientists, clinicians, legal and regulatory specialists, patient advocates and other citizens could offer at least some consensus on an appropriate trajectory for the field.

Two outputs are needed. First, guidelines for researchers; second, a reliable source of information about the current state of the research, its possible trajectory, its potential medical benefits and the key ethical and policy issues it raises. Both guidelines and information should be disseminated to journalists, ethics committees, regulatory bodies and policymakers.

Four questions in particular need attention.

Should embryo models be treated legally and ethically as human embryos, now or in the future?

Which research applications involving human embryo models are ethically acceptable?

How far should attempts to develop an intact human embryo in a dish be allowed to proceed?

Does a modelled part of a human embryo have an ethical and legal status similar to that of a complete embryo?

The info is here.

Tuesday, January 15, 2019

Cheyenne Psychologist And His Wife Sentenced To 37 Months In Prison For Health Care Fraud

Department of Justice
U.S. Attorney’s Office
District of Wyoming
Press Release of December 4, 2018

John Robert Sink, Jr., 68, and Diane Marie Sink, 63, of Cheyenne, Wyoming, were sentenced on December 3, 2018, to serve 37 months in prison for making false statements as part of a scheme to fraudulently bill Wyoming Medicaid for mental health services, which were never provided, announced United States Attorney Mark A. Klaassen. The Sinks, who are married, were also ordered to pay over $6.2 million in restitution to the Wyoming Department of Health and the United States Department of Health and Human Services, and to forfeit over $750,000 in assets traceable to the fraud, including cash, retirement accounts, vehicles, and a residence.

The Sinks were indicted in March 2018 by a federal grand jury for health care fraud, making false statements, and money laundering. At all times relevant to the indictment, John and Diane Sink operated a psychological practice in Cheyenne. John Sink, who was a licensed Ph.D. psychologist, directed mental health services. Diane Sink submitted bills to Wyoming Medicaid and managed the business and its employees. The Sinks provided services to developmentally disabled Medicaid beneficiaries and billed Medicaid for those services.

Between February 2012 and December 2016, the Sinks submitted bills to Wyoming Medicaid for $6.2 million in alleged group therapy. These bills were false and fraudulent because the services provided did not qualify as group therapy as defined by Wyoming Medicaid. The Sinks also falsely billed Medicaid for beneficiaries who were not participating in any activities, and therefore did not receive any of the claimed mental health services. When Wyoming Medicaid audited the Sinks in May 2016, the Sinks did not have necessary documentation to support their billing, so they ordered an employee to create backdated treatment plans. The Sinks then submitted these phony treatment plans to Wyoming Medicaid to justify the Sinks’ false group therapy bills, and to cover up their fraudulent billing scheme.

The press release is here.

The ends justify the meanness: An investigation of psychopathic traits and utilitarian moral endorsement

Justin Balash and Diana M. Falkenbach
Personality and Individual Differences
Volume 127, 1 June 2018, Pages 127-132

Abstract

Although psychopathy has traditionally been synonymous with immorality, little research exists on the ethical reasoning of psychopathic individuals. Recent examination of psychopathy and utilitarianism suggests that psychopaths' moral decision-making differs from nonpsychopaths (Koenigs et al., 2012). The current study examined the relationship between psychopathic traits (PPI-R, Lilienfeld & Widows, 2005; TriPM, Patrick, 2010) and utilitarian endorsement (moral dilemmas, Greene et al., 2001) in a college sample (n = 316). The relationships between utilitarian decisions and triarchic dimensions were explored and empathy and aggression were examined as mediating factors. Hypotheses were partially supported, with Disinhibition and Meanness traits relating to personal utilitarian decisions; aggression partially mediated the relationship between psychopathic traits and utilitarian endorsements. Implications and future directions are further discussed.

Highlights

• Authors examined the relationship between psychopathy and utilitarian decision-making.

• Empathy and aggression were explored as mediating factors.

• Disinhibition and Meanness were positively related to personal utilitarian decisions.

• Meanness, Coldheartedness, and PPI-R-II were associated with personal utilitarian decisions.

• Aggression partially mediated the relationship between psychopathy and utilitarian decisions.

The research can be found here.

Monday, January 14, 2019

Air Force Psychologist Found Guilty of Sexual Assault Under Guise of Exposure Therapy

Caitlin Foster
Business Insider
Originally published Dec. 10, 2018

A psychologist at Travis Air Force Base in California was found guilty on Friday of sexually assaulting military-officer patients who were seeking treatment for post-traumatic stress disorder, The Daily Republic reported.

Heath Sommer may face up to 11 years and eight months in prison after receiving a guilty verdict on six felony counts of sexual assault, according to the Republic.

Sommer used a treatment known as "exposure therapy" to lure his patients, who were military officers with previous sexual-assault experiences, into performing sexual activity, the Republic reported.

According to charges brought by Brian Roberts, the deputy district attorney who prosecuted the case, Sommer assaulted his patients through "fraudulent representation that the sexual penetration served a professional purpose when it served no professional purpose," the Republic reported.

The Amazing Ways Artificial Intelligence Is Transforming Genomics and Gene Editing

Bernard Marr
Forbes.com
Originally posted November 16, 2018

Here is an excerpt:

Another thing experts are working to resolve in the process of gene editing is how to prevent off-target effects—when the tools mistakenly work on the wrong gene because it looks similar to the target gene.

Artificial intelligence and machine learning help make gene editing initiatives more accurate, cheaper and easier.

The future for AI and gene technology is expected to include pharmacogenomics, genetic screening tools for newborns, enhancements to agriculture and more. While we can't predict the future, one thing is for sure: AI and machine learning will accelerate our understanding of our own genetic makeup and those of other living organisms.

The info is here.

Sunday, January 13, 2019

The bad news on human nature, in 10 findings from psychology

Christian Jarrett
aeon.co
Originally published 

Here is an excerpt:

We are vain and overconfident. Our irrationality and dogmatism might not be so bad were they married to some humility and self-insight, but most of us walk about with inflated views of our abilities and qualities, such as our driving skills, intelligence and attractiveness – a phenomenon that’s been dubbed the Lake Wobegon Effect after the fictional town where ‘all the women are strong, all the men are good-looking, and all the children are above average’. Ironically, the least skilled among us are the most prone to overconfidence (the so-called Dunning-Kruger effect). This vain self-enhancement seems to be most extreme and irrational in the case of our morality, such as in how principled and fair we think we are. In fact, even jailed criminals think they are kinder, more trustworthy and honest than the average member of the public.

We are moral hypocrites. It pays to be wary of those who are the quickest and loudest in condemning the moral failings of others – the chances are that moral preachers are as guilty themselves, but take a far lighter view of their own transgressions. In one study, researchers found that people rated the exact same selfish behaviour (giving themselves the quicker and easier of two experimental tasks on offer) as being far less fair when perpetrated by others. Similarly, there is a long-studied phenomenon known as actor-observer asymmetry, which in part describes our tendency to attribute other people’s bad deeds, such as our partner’s infidelities, to their character, while attributing the same deeds performed by ourselves to the situation at hand. These self-serving double standards could even explain the common feeling that incivility is on the increase – recent research shows that we view the same acts of rudeness far more harshly when they are committed by strangers than by our friends or ourselves.


Saturday, January 12, 2019

Monitoring Moral Virtue: When the Moral Transgressions of In-Group Members Are Judged More Severely

Karim Bettache, Takeshi Hamamura, J.A. Idrissi, R.G.J. Amenyogbo, & C. Chiu
Journal of Cross-Cultural Psychology
First Published December 5, 2018
https://doi.org/10.1177/0022022118814687

Abstract

Literature indicates that people tend to judge the moral transgressions committed by out-group members more severely than those of in-group members. However, these transgressions often conflate a moral transgression with some form of intergroup harm. There is little research examining in-group versus out-group transgressions of harmless offenses, which violate moral standards that bind people together (binding foundations). As these moral standards center around group cohesiveness, a transgression committed by an in-group member may be judged more severely. The current research presented Dutch Muslims (Study 1), American Christians (Study 2), and Indian Hindus (Study 3) with a set of fictitious stories depicting harmless and harmful moral transgressions. Consistent with our expectations, participants who strongly identified with their religious community judged harmless moral offenses committed by in-group members, relative to out-group members, more severely. In contrast, this effect was absent when participants judged harmful moral transgressions. We discuss the implications of these results.

Friday, January 11, 2019

10 ways to detect health-care lies

Lawton R. Burns and Mark V. Pauly
thehill.com
Originally posted December 9, 2018

Here is an excerpt:

Why does this kind of behavior occur? While flat-out dishonesty for short-term financial gains is an obvious answer, a more common explanation is the need to say something positive when there is nothing positive to say.

This problem is acute in health care. Suppose you are faced with the assignment of solving the ageless dilemma of reducing costs while simultaneously raising quality of care. You could respond with a message of failure or a discussion of inevitable tradeoffs.

But you could also pick an idea with some internal plausibility and political appeal, fashion some careful but conditional language and announce the launch of your program. Of course, you will add that it will take a number of years before success appears, but you and your experts will argue for the idea in concept, with the details to be worked out later.

At minimum, unqualified acceptance of such proposed ideas, even (and especially) by apparently qualified people, will waste resources and will lead to enormous frustration for your audience of politicians and outraged critics of the current system. The incentives to generate falsehoods are not likely to diminish — if anything, rising spending and stagnant health outcomes strengthen them — so it is all the more important to have an accurate and fast way to detect and deter lies in health care.

The info is here.

The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence

Julia Powles
Medium.com
Originally posted December 7, 2018

Here is an excerpt:

There are three problems with this focus on A.I. bias. The first is that addressing bias as a computational problem obscures its root causes. Bias is a social problem, and seeking to solve it within the logic of automation is always going to be inadequate.

Second, even apparent success in tackling bias can have perverse consequences. Take the example of a facial recognition system that works poorly on women of color because of the group’s underrepresentation both in the training data and among system designers. Alleviating this problem by seeking to “equalize” representation merely co-opts designers in perfecting vast instruments of surveillance and classification.

When underlying systemic issues remain fundamentally untouched, the bias fighters simply render humans more machine readable, exposing minorities in particular to additional harms.

Third — and most dangerous and urgent of all — is the way in which the seductive controversy of A.I. bias, and the false allure of “solving” it, detracts from bigger, more pressing questions. Bias is real, but it’s also a captivating diversion.

The info is here.

Thursday, January 10, 2019

China Uses "Ethics" as Censorship

China sets up a video game ethics panel in its new approval process

Owen S. Good
www.polygon.com
Originally posted December 8, 2018

In China, it’s about ethics in video games.

The South China Morning Post reports that the nation now has an “Online Game Ethics Committee,” as a part of the government’s laborious process for game censorship approvals. China Central Television, the state’s broadcaster, said this ethics-in-games committee was formed to address national concerns over internet addiction, “unsuitable content” and childhood myopia (nearsightedness, apparently with video games as a cause?).

The state TV report said the committee has already looked at 20 games, rejecting nine and ruling that the other 11 have to change “certain content.” The titles of the games were not revealed.

The info is here.

Every Leader’s Guide to the Ethics of AI

Thomas H. Davenport and Vivek Katyal
MIT Sloan Management Review Blog
Originally published

Here is an excerpt:

Leaders should ask themselves whether the AI applications they use treat all groups equally. Unfortunately, some AI applications, including machine learning algorithms, put certain groups at a disadvantage. This issue, called algorithmic bias, has been identified in diverse contexts, including judicial sentencing, credit scoring, education curriculum design, and hiring decisions. Even when the creators of an algorithm have not intended any bias or discrimination, they and their companies have an obligation to try to identify and prevent such problems and to correct them upon discovery.

Ad targeting in digital marketing, for example, uses machine learning to make many rapid decisions about what ad is shown to which consumer. Most companies don’t even know how the algorithms work, and the cost of an inappropriately targeted ad is typically only a few cents. However, some algorithms have been found to target high-paying job ads more to men, and others target ads for bail bondsmen to people with names more commonly held by African Americans. The ethical and reputational costs of biased ad-targeting algorithms, in such cases, can potentially be very high.

Of course, bias isn’t a new problem. Companies using traditional decision-making processes have made these judgment errors, and algorithms created by humans are sometimes biased as well. But AI applications, which can create and apply models much faster than traditional analytics, are more likely to exacerbate the issue. The problem becomes even more complex when black box AI approaches make interpreting or explaining the model’s logic difficult or impossible. While full transparency of models can help, leaders who consider their algorithms a competitive asset will quite likely resist sharing them.

The info is here.

Wednesday, January 9, 2019

Why It’s Easier to Make Decisions for Someone Else

Evan Polman
Harvard Business Review
Originally posted November 13, 2018

Here is an excerpt:

What we found was two-fold: Not only did participants choose differently when it was for themselves rather than for someone else, but the way they chose was different. When choosing for themselves, participants focused more on a granular level, zeroing in on the minutiae, something we described in our research as a cautious mindset. Employing a cautious mindset when making a choice means being more reserved, deliberate, and risk averse. Rather than exploring and collecting a plethora of options, the cautious mindset prefers to consider a few at a time on a deeper level, examining a cross-section of the larger whole.

But when it came to deciding for others, study participants looked more at the array of options and focused on their overall impression. They were bolder, operating from what we called an adventurous mindset. An adventurous mindset prioritizes novelty over a deeper dive into what those options actually consist of; the availability of numerous choices is more appealing than their viability. Simply put, they preferred and examined more information before making a choice, and as my previous research has shown, they recommended their choice to others with more gusto.

These findings align with my earlier work with Kyle Emich of University of Delaware on how people are more creative on behalf of others. When we are brainstorming ideas to other people’s problems, we’re inspired; we have a free flow of ideas to spread out on the table without judgment, second-guessing, or overthinking.

The info is here.

'Should we even consider this?' WHO starts work on gene editing ethics

Agence France-Presse
Originally published 3 Dec 2018

The World Health Organization is creating a panel to study the implications of gene editing after a Chinese scientist controversially claimed to have created the world’s first genetically edited babies.

“It cannot just be done without clear guidelines,” Tedros Adhanom Ghebreyesus, the head of the UN health agency, said in Geneva.

The organisation was gathering experts to discuss rules and guidelines on “ethical and social safety issues”, added Tedros, a former Ethiopian health minister.

Tedros made the comments after a medical trial, which was led by Chinese scientist He Jiankui, claimed to have successfully altered the DNA of twin girls, whose father is HIV-positive, to prevent them from contracting the virus.

His experiment has prompted widespread condemnation from the scientific community in China and abroad, as well as a backlash from the Chinese government.

The info is here.

Tuesday, January 8, 2019

The 3 faces of clinical reasoning: Epistemological explorations of disparate error reduction strategies.

Sandra Monteiro, Geoff Norman, & Jonathan Sherbino
J Eval Clin Pract. 2018 Jun;24(3):666-673.

Abstract

There is general consensus that clinical reasoning involves 2 stages: a rapid stage where 1 or more diagnostic hypotheses are advanced and a slower stage where these hypotheses are tested or confirmed. The rapid hypothesis generation stage is considered inaccessible for analysis or observation. Consequently, recent research on clinical reasoning has focused specifically on improving the accuracy of the slower, hypothesis confirmation stage. Three perspectives have developed in this line of research, and each proposes different error reduction strategies for clinical reasoning. This paper considers these 3 perspectives and examines the underlying assumptions. Additionally, this paper reviews the evidence, or lack of, behind each class of error reduction strategies. The first perspective takes an epidemiological stance, appealing to the benefits of incorporating population data and evidence-based medicine in every day clinical reasoning. The second builds on the heuristic and bias research programme, appealing to a special class of dual process reasoning models that theorizes a rapid error prone cognitive process for problem solving with a slower more logical cognitive process capable of correcting those errors. Finally, the third perspective borrows from an exemplar model of categorization that explicitly relates clinical knowledge and experience to diagnostic accuracy.

A pdf can be downloaded here.

Algorithmic governance: Developing a research agenda through the power of collective intelligence

John Danaher, Michael J Hogan, Chris Noone, Ronan Kennedy, et al.
Big Data & Society
July–December 2017: 1–21

Abstract

We are living in an algorithmic age where mathematics and computer science are coming together in powerful new ways to influence, shape and guide our behaviour and the governance of our societies. As these algorithmic governance structures proliferate, it is vital that we ensure their effectiveness and legitimacy. That is, we need to ensure that they are an effective means for achieving a legitimate policy goal that are also procedurally fair, open and unbiased. But how can we ensure that algorithmic governance structures are both? This article shares the results of a collective intelligence workshop that addressed exactly this question. The workshop brought together a multidisciplinary group of scholars to consider (a) barriers to legitimate and effective algorithmic governance and (b) the research methods needed to address the nature and impact of specific barriers. An interactive management workshop technique was used to harness the collective intelligence of this multidisciplinary group. This method enabled participants to produce a framework and research agenda for those who are concerned about algorithmic governance. We outline this research agenda below, providing a detailed map of key research themes, questions and methods that our workshop felt ought to be pursued. This builds upon existing work on research agendas for critical algorithm studies in a unique way through the method of collective intelligence.

The paper is here.

Monday, January 7, 2019

Ethics of missionary work called into question after death of American missionary John Allen Chau

Holly Meyer
Nashville Tennessean
Originally published December 2, 2018

Christians are facing scrutiny for evangelizing in remote parts of the world after members of an isolated tribe in the Bay of Bengal killed a U.S. missionary who was trying to tell them about Jesus.

The death of John Allen Chau raises questions about the ethics of missionary work and whether he acted appropriately by contacting the Sentinelese, a self-sequestered Indian tribe that has resisted outside contact for thousands of years.

It is tragic, but figuring out what can be learned from Chau's death honors his memory and passion, said Scott Harris, the missions minister at Brentwood Baptist Church and a former trustee chairman of the Southern Baptist Convention's International Mission Board.

"In general, evaluation and accountability is so needed," Harris said. "Maturing fieldworkers that have a heart for the cultures of the world will welcome honest, hard questions." 

The info is here.

The Boundary Between Our Bodies and Our Tech

Kevin Lincoln
Pacific Standard
Originally published November 8, 2018

Here is an excerpt:

"They argued that, essentially, the mind and the self are extended to those devices that help us perform what we ordinarily think of as our cognitive tasks," Lynch says. This can include items as seemingly banal and analog as a piece of paper and a pen, which help us remember, a duty otherwise performed by the brain. According to this philosophy, the shopping list, for example, becomes part of our memory, the mind spilling out beyond the confines of our skull to encompass anything that helps it think.

"Now if that thought is right, it's pretty clear that our minds have become even more radically extended than ever before," Lynch says. "The idea that our self is expanding through our phones is plausible, and that's because our phones, and our digital devices generally—our smartwatches, our iPads—all these things have become a really intimate part of how we go about our daily lives. Intimate in the sense in which they're not only on our body, but we sleep with them, we wake up with them, and the air we breathe is filled, in both a literal and figurative sense, with the trails of ones and zeros that these devices leave behind."

This gets at one of the essential differences between a smartphone and a piece of paper, which is that our relationship with our phones is reciprocal: We not only put information into the device, we also receive information from it, and, in that sense, it shapes our lives far more actively than would, say, a shopping list. The shopping list isn't suggesting to us, based on algorithmic responses to our past and current shopping behavior, what we should buy; the phone is.

The info is here.

Sunday, January 6, 2019

Toward an Ethics of AI Assistants: an Initial Framework

John Danaher
Philosophy and Technology: 1-25 (forthcoming)

Abstract

Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I will argue that the ethics of their use is complex. There are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.

The paper is here.

Saturday, January 5, 2019

Emotion shapes the diffusion of moralized content in social networks

William J. Brady, Julian A. Wills, John T. Jost, Joshua A. Tucker, and Jay J. Van Bavel
PNAS July 11, 2017 114 (28) 7313-7318; published ahead of print June 26, 2017 https://doi.org/10.1073/pnas.1618923114

Abstract

Political debate concerning moralized issues is increasingly common in online social networks. However, moral psychology has yet to incorporate the study of social networks to investigate processes by which some moral ideas spread more rapidly or broadly than others. Here, we show that the expression of moral emotion is key for the spread of moral and political ideas in online social networks, a process we call “moral contagion.” Using a large sample of social media communications about three polarizing moral/political issues (n = 563,312), we observed that the presence of moral-emotional words in messages increased their diffusion by a factor of 20% for each additional word. Furthermore, we found that moral contagion was bounded by group membership; moral-emotional language increased diffusion more strongly within liberal and conservative networks, and less between them. Our results highlight the importance of emotion in the social transmission of moral ideas and also demonstrate the utility of social network methods for studying morality. These findings offer insights into how people are exposed to moral and political ideas through social networks, thus expanding models of social influence and group polarization as people become increasingly immersed in social media networks.

Significance

Twitter and other social media platforms are believed to have altered the course of numerous historical events, from the Arab Spring to the US presidential election. Online social networks have become a ubiquitous medium for discussing moral and political ideas. Nevertheless, the field of moral psychology has yet to investigate why some moral and political ideas spread more widely than others. Using a large sample of social media communications concerning polarizing issues in public policy debates (gun control, same-sex marriage, climate change), we found that the presence of moral-emotional language in political messages substantially increases their diffusion within (and less so between) ideological group boundaries. These findings offer insights into how moral ideas spread within networks during real political discussion.

Friday, January 4, 2019

The Objectivity Illusion in Medical Practice

Donald Redelmeier & Lee Ross
The Association for Psychological Science
Published November 2018

Insights into pitfalls in judgment and decision-making are essential for the practice of medicine. However, only the most exceptional physicians recognize their own personal biases and blind spots. More typically, they are like most humans in believing that they see objects, events, or issues “as they really are” and, accordingly, that others who see things differently are mistaken. This illusion of personal objectivity reflects the implicit conviction of a one-to-one correspondence between the perceived properties and the real nature of an object or event. For patients, such naïve realism means a world of red apples, loud sounds, and solid chairs. For practitioners, it means a world of red rashes, loud murmurs, and solid lymph nodes. However, a lymph node that feels normal to one physician may seem suspiciously enlarged and hard to another physician, with a resulting disagreement about the indications for a lymph node biopsy. A research study supporting a new drug or procedure may seem similarly convincing to one physician but flawed to another.

Convictions about whose perceptions are more closely attuned to reality can be a source of endless interpersonal friction. Spouses, for example, may disagree about appropriate thermostat settings, with one perceiving the room as too cold while the other finds the temperature just right. Moreover, each attributes the other’s perceptions to some pathology or idiosyncrasy.

The info is here.

Beyond safety questions, gene editing will force us to deal with a moral quandary

Josephine Johnston
STAT News
Originally published November 29, 2018

Here is an excerpt:

The majority of this criticism is motivated by major concerns about safety — we simply do not yet know enough about the impact of CRISPR-Cas9, the powerful new gene-editing tool, to use it to create children. But there’s a second, equally pressing concern mixed into many of these condemnations: that gene-editing human eggs, sperm, or embryos is morally wrong.

That moral claim may prove more difficult to resolve than the safety questions, because altering the genomes of future persons — especially in ways that can be passed on generation after generation — goes against international declarations and conventions, national laws, and the ethics codes of many scientific organizations. It also just feels wrong to many people, akin to playing God.

As a bioethicist and a lawyer, I am in no position to say whether CRISPR will at some point prove safe and effective enough to justify its use in human reproductive cells or embryos. But I am willing to predict that blanket prohibitions on permanent changes to the human genome will not stand. When those prohibitions fall — as today’s announcement from the Second International Summit on Human Genome Editing suggests they will — what ethical guideposts or moral norms should replace them?

The info is here.

Thursday, January 3, 2019

As China Seeks Scientific Greatness, Some Say Ethics Are an Afterthought

Sui-Lee Wee and Elsie Chen
The New York Times
Originally published November 30, 2018

First it was a proposal to transplant a head to a new body. Then it was the world’s first cloned primates. Now it is genetically edited babies.

Those recent scientific announcements, generating reactions that went from unease to shock, had one thing in common: All involved scientists from China.

China has set its sights on becoming a leader in science, pouring millions of dollars into research projects and luring back top Western-educated Chinese talent. The country’s scientists are accustomed to attention-grabbing headlines by their colleagues as they race to dominate their fields.

But when He Jiankui announced on Monday that he had created the world’s first genetically edited babies, Chinese scientists — like those elsewhere — denounced it as a step too far. Now many are asking whether their country’s intense focus on scientific achievement has come at the expense of ethical standards.

The info is here.

Why We Need to Audit Algorithms

James Guszcza, Iyad Rahwan, Will Bible, Manuel Cebrian, & Vic Katyal
Harvard Business Review
Originally published November 28, 2018

Algorithmic decision-making and artificial intelligence (AI) hold enormous potential and are likely to be economic blockbusters, but we worry that the hype has led many people to overlook the serious problems of introducing algorithms into business and society. Indeed, we see many succumbing to what Microsoft’s Kate Crawford calls “data fundamentalism” — the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. A more nuanced view is needed. It is by now abundantly clear that, left unchecked, AI algorithms embedded in digital and social technologies can encode societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and even impair our mental wellbeing.

Ensuring that societal values are reflected in algorithms and AI technologies will require no less creativity, hard work, and innovation than developing the AI technologies themselves. We have a proposal for a good place to start: auditing. Companies have long been required to issue audited financial statements for the benefit of financial markets and other stakeholders. That’s because — like algorithms — companies’ internal operations appear as “black boxes” to those on the outside. This gives managers an informational advantage over the investing public which could be abused by unethical actors. Requiring managers to report periodically on their operations provides a check on that advantage. To bolster the trustworthiness of these reports, independent auditors are hired to provide reasonable assurance that the reports coming from the “black box” are free of material misstatement. Should we not subject societally impactful “black box” algorithms to comparable scrutiny?
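To make the financial-audit analogy concrete, here is a minimal sketch of one statistical check an algorithm audit might run: the "disparate impact" ratio comparing positive-decision rates across two groups. The check, the sample data, and the 0.8 threshold (the "80% rule" used in US employment law) are illustrative; the authors do not prescribe a specific test:

```python
# Sketch of a simple bias-audit check: compare the rate of favorable
# decisions an algorithm makes for two groups, and flag the ratio if it
# falls below 0.8. Data and threshold are illustrative assumptions.

from collections import defaultdict

def positive_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact(decisions, group_a, group_b):
    """Ratio of group_a's approval rate to group_b's (the '80% rule' flags values < 0.8)."""
    rates = positive_rates(decisions)
    return rates[group_a] / rates[group_b]

# Hypothetical audit sample: group A approved 40% of the time, group B 60%.
sample = [("A", True)] * 40 + [("A", False)] * 60 + [("B", True)] * 60 + [("B", False)] * 40
ratio = disparate_impact(sample, "A", "B")
print(round(ratio, 2), "flagged" if ratio < 0.8 else "passes 80% rule")  # 0.67 flagged
```

A real audit would go further — examining training data provenance, model documentation, and error rates across many subgroups — but even a check this simple illustrates how an outside party can probe a "black box" through its outputs alone.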

The info is here.

Wednesday, January 2, 2019

When Fox News staffers break ethics rules, discipline follows — or does it?

Margaret Sullivan
The Washington Post
Originally published November 29, 2018

There are ethical standards at Fox News, we’re told.

But just what they are, or how they’re enforced, is an enduring mystery.

When Sean Hannity and Jeanine Pirro appeared onstage with President Trump at a Missouri campaign rally, the network publicly acknowledged that this ran counter to its practices.

“Fox News does not condone any talent participating in campaign events,” the network said in a statement. “This was an unfortunate distraction and has been addressed.”

Or take what happened this week.

When the staff of “Fox & Friends” was found to have provided a pre-interview script for Scott Pruitt, then the Environmental Protection Agency head, the network frowned: “This is not standard practice whatsoever and the matter is being addressed internally with those involved.”

“Not standard practice” is putting it mildly, as the Daily Beast’s Maxwell Tani — who broke the story — noted, quoting David Hawkins, formerly of CBS News and CNN, who teaches journalism at Fordham University...

The info is here.

The Intuitive Appeal of Explainable Machines

Andrew D. Selbst & Solon Barocas
Fordham Law Review, Volume 87

Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties.

Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible.


In most cases, intuition serves as the unacknowledged bridge from a descriptive account to a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.

The info is here.

Tuesday, January 1, 2019

AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations

Floridi, L., Cowls, J., Beltrametti, M. et al.
Minds & Machines (2018).
https://doi.org/10.1007/s11023-018-9482-5

Abstract

This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.