Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Sunday, January 6, 2019

Toward an Ethics of AI Assistants: an Initial Framework

John Danaher
Philosophy & Technology, 1-25 (forthcoming)

Abstract

Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I will argue that the ethics of their use is complex. There are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.

The paper is here.

Saturday, January 5, 2019

Emotion shapes the diffusion of moralized content in social networks

William J. Brady, Julian A. Wills, John T. Jost, Joshua A. Tucker, and Jay J. Van Bavel
PNAS, July 11, 2017, 114(28), 7313-7318; published ahead of print June 26, 2017. https://doi.org/10.1073/pnas.1618923114

Abstract

Political debate concerning moralized issues is increasingly common in online social networks. However, moral psychology has yet to incorporate the study of social networks to investigate processes by which some moral ideas spread more rapidly or broadly than others. Here, we show that the expression of moral emotion is key for the spread of moral and political ideas in online social networks, a process we call “moral contagion.” Using a large sample of social media communications about three polarizing moral/political issues (n = 563,312), we observed that the presence of moral-emotional words in messages increased their diffusion by a factor of 20% for each additional word. Furthermore, we found that moral contagion was bounded by group membership; moral-emotional language increased diffusion more strongly within liberal and conservative networks, and less between them. Our results highlight the importance of emotion in the social transmission of moral ideas and also demonstrate the utility of social network methods for studying morality. These findings offer insights into how people are exposed to moral and political ideas through social networks, thus expanding models of social influence and group polarization as people become increasingly immersed in social media networks.
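
To give a rough sense of the scale of that effect, the sketch below (an illustration only, not the authors' statistical model) treats the reported 20% increase as compounding multiplicatively with each additional moral-emotional word.

```python
# Illustrative sketch only (not the authors' model): assumes the reported
# 20% increase in diffusion per moral-emotional word compounds multiplicatively.
PER_WORD_MULTIPLIER = 1.20

def relative_diffusion(moral_emotional_words: int) -> float:
    """Expected diffusion relative to an otherwise identical message
    containing no moral-emotional words."""
    return PER_WORD_MULTIPLIER ** moral_emotional_words

for k in range(4):
    print(f"{k} moral-emotional words -> {relative_diffusion(k):.2f}x expected diffusion")
```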

Significance

Twitter and other social media platforms are believed to have altered the course of numerous historical events, from the Arab Spring to the US presidential election. Online social networks have become a ubiquitous medium for discussing moral and political ideas. Nevertheless, the field of moral psychology has yet to investigate why some moral and political ideas spread more widely than others. Using a large sample of social media communications concerning polarizing issues in public policy debates (gun control, same-sex marriage, climate change), we found that the presence of moral-emotional language in political messages substantially increases their diffusion within (and less so between) ideological group boundaries. These findings offer insights into how moral ideas spread within networks during real political discussion.

Friday, January 4, 2019

The Objectivity Illusion in Medical Practice

Donald Redelmeier & Lee Ross
The Association for Psychological Science
Published November 2018

Insights into pitfalls in judgment and decision-making are essential for the practice of medicine. However, only the most exceptional physicians recognize their own personal biases and blind spots. More typically, they are like most humans in believing that they see objects, events, or issues “as they really are” and, accordingly, that others who see things differently are mistaken. This illusion of personal objectivity reflects the implicit conviction of a one-to-one correspondence between the perceived properties and the real nature of an object or event. For patients, such naïve realism means a world of red apples, loud sounds, and solid chairs. For practitioners, it means a world of red rashes, loud murmurs, and solid lymph nodes. However, a lymph node that feels normal to one physician may seem suspiciously enlarged and hard to another physician, with a resulting disagreement about the indications for a lymph node biopsy. A research study supporting a new drug or procedure may seem similarly convincing to one physician but flawed to another.

Convictions about whose perceptions are more closely attuned to reality can be a source of endless interpersonal friction. Spouses, for example, may disagree about appropriate thermostat settings, with one perceiving the room as too cold while the other finds the temperature just right. Moreover, each attributes the other’s perceptions to some pathology or idiosyncrasy.

The info is here.

Beyond safety questions, gene editing will force us to deal with a moral quandary

Josephine Johnston
STAT News
Originally published November 29, 2018

Here is an excerpt:

The majority of this criticism is motivated by major concerns about safety — we simply do not yet know enough about the impact of CRISPR-Cas9, the powerful new gene-editing tool, to use it to create children. But there’s a second, equally pressing concern mixed into many of these condemnations: that gene-editing human eggs, sperm, or embryos is morally wrong.

That moral claim may prove more difficult to resolve than the safety questions, because altering the genomes of future persons — especially in ways that can be passed on generation after generation — goes against international declarations and conventions, national laws, and the ethics codes of many scientific organizations. It also just feels wrong to many people, akin to playing God.

As a bioethicist and a lawyer, I am in no position to say whether CRISPR will at some point prove safe and effective enough to justify its use in human reproductive cells or embryos. But I am willing to predict that blanket prohibitions on permanent changes to the human genome will not stand. When those prohibitions fall — as today’s announcement from the Second International Summit on Human Genome Editing suggests they will — what ethical guideposts or moral norms should replace them?

The info is here.

Thursday, January 3, 2019

As China Seeks Scientific Greatness, Some Say Ethics Are an Afterthought

Sui-Lee Wee and Elsie Chen
The New York Times
Originally published November 30, 2018

First it was a proposal to transplant a head to a new body. Then it was the world’s first cloned primates. Now it is genetically edited babies.

Those recent scientific announcements, generating reactions that went from unease to shock, had one thing in common: All involved scientists from China.

China has set its sights on becoming a leader in science, pouring millions of dollars into research projects and luring back top Western-educated Chinese talent. The country’s scientists are accustomed to attention-grabbing headlines by their colleagues as they race to dominate their fields.

But when He Jiankui announced on Monday that he had created the world’s first genetically edited babies, Chinese scientists — like those elsewhere — denounced it as a step too far. Now many are asking whether their country’s intense focus on scientific achievement has come at the expense of ethical standards.

The info is here.

Why We Need to Audit Algorithms

James Guszcza, Iyad Rahwan, Will Bible, Manuel Cebrian, & Vic Katyal
Harvard Business Review
Originally published November 28, 2018

Algorithmic decision-making and artificial intelligence (AI) hold enormous potential and are likely to be economic blockbusters, but we worry that the hype has led many people to overlook the serious problems of introducing algorithms into business and society. Indeed, we see many succumbing to what Microsoft’s Kate Crawford calls “data fundamentalism” — the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. A more nuanced view is needed. It is by now abundantly clear that, left unchecked, AI algorithms embedded in digital and social technologies can encode societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and even impair our mental wellbeing.

Ensuring that societal values are reflected in algorithms and AI technologies will require no less creativity, hard work, and innovation than developing the AI technologies themselves. We have a proposal for a good place to start: auditing. Companies have long been required to issue audited financial statements for the benefit of financial markets and other stakeholders. That’s because — like algorithms — companies’ internal operations appear as “black boxes” to those on the outside. This gives managers an informational advantage over the investing public which could be abused by unethical actors. Requiring managers to report periodically on their operations provides a check on that advantage. To bolster the trustworthiness of these reports, independent auditors are hired to provide reasonable assurance that the reports coming from the “black box” are free of material misstatement. Should we not subject societally impactful “black box” algorithms to comparable scrutiny?
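
The excerpt does not specify what an algorithm audit would test; as one hypothetical illustration, an auditor might compare a model's positive-decision rates across demographic groups, as sketched below (the function names and toy data are invented for the example).

```python
# Hypothetical illustration of one check an algorithm audit might include:
# comparing a model's positive-decision rates across demographic groups
# (a "disparate impact" ratio). This is not a procedure from the article.
from collections import defaultdict

def positive_rate_by_group(decisions, groups):
    """decisions: list of 0/1 model outputs; groups: list of group labels."""
    counts, positives = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        counts[g] += 1
        positives[g] += d
    return {g: positives[g] / counts[g] for g in counts}

def disparate_impact_ratio(decisions, groups):
    """Ratio of the lowest to the highest group positive rate (1.0 = parity)."""
    rates = positive_rate_by_group(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Toy data: a real audit would use the deployed model's actual decisions.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(positive_rate_by_group(decisions, groups))
print(f"disparate impact ratio: {disparate_impact_ratio(decisions, groups):.2f}")
```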

The info is here.

Wednesday, January 2, 2019

When Fox News staffers break ethics rules, discipline follows — or does it?

Margaret Sullivan
The Washington Post
Originally published November 29, 2018

There are ethical standards at Fox News, we’re told.

But just what they are, or how they’re enforced, is an enduring mystery.

When Sean Hannity and Jeanine Pirro appeared onstage with President Trump at a Missouri campaign rally, the network publicly acknowledged that this ran counter to its practices.

“Fox News does not condone any talent participating in campaign events,” the network said in a statement. “This was an unfortunate distraction and has been addressed.”

Or take what happened this week.

When the staff of “Fox & Friends” was found to have provided a pre-interview script for Scott Pruitt, then the Environmental Protection Agency head, the network frowned: “This is not standard practice whatsoever and the matter is being addressed internally with those involved.”

“Not standard practice” is putting it mildly, as the Daily Beast’s Maxwell Tani — who broke the story — noted, quoting David Hawkins, formerly of CBS News and CNN, who teaches journalism at Fordham University...

The info is here.

The Intuitive Appeal of Explainable Machines

Andrew D. Selbst & Solon Barocas
Fordham Law Review, Volume 87

Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties.

Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible.


In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms for normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model’s development, not just explanations of the model itself.
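
As a hypothetical illustration of the distinction (not an example from the Article): even a fully transparent model can be described rule by rule — answering the inscrutability problem — while leaving unanswered why its rules turned out as they did. The feature names and weights below are invented.

```python
# Hypothetical sketch: a credit model reduced to a transparent linear rule.
# Listing the weights gives a "sensible description of the rules"
# (inscrutability addressed), but it says nothing about why the training
# process assigned these weights -- the nonintuitiveness problem.
weights = {
    "income": 0.8,
    "debt_ratio": -1.2,
    "years_at_address": 0.3,   # counterintuitive features can still carry weight
}
intercept = -0.5

def score(applicant: dict) -> float:
    """Linear score; approve if the score is positive."""
    return intercept + sum(weights[f] * applicant.get(f, 0.0) for f in weights)

# The rule is fully readable, yet silent on why it is what it is.
for feature, w in weights.items():
    print(f"{feature}: weight {w:+.1f}")

applicant = {"income": 1.0, "debt_ratio": 0.4, "years_at_address": 2}
print("approve" if score(applicant) > 0 else "decline")
```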

The info is here.

Tuesday, January 1, 2019

AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations

Floridi, L., Cowls, J., Beltrametti, M. et al.
Minds & Machines (2018).
https://doi.org/10.1007/s11023-018-9482-5

Abstract

This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.