Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy

Saturday, December 3, 2022

Public attitudes value interpretability but prioritize accuracy in Artificial Intelligence

Nussberger, A.-M., Luo, L., Celis, L. E., & Crockett, M. J. (2022). Public attitudes value interpretability but prioritize accuracy in Artificial Intelligence. Nature Communications, 13(1), 5821.

Abstract

As Artificial Intelligence (AI) proliferates across important social institutions, many of the most powerful AI systems available are difficult to interpret for end-users and engineers alike. Here, we sought to characterize public attitudes towards AI interpretability. Across seven studies (N = 2475), we demonstrate robust and positive attitudes towards interpretable AI among non-experts that generalize across a variety of real-world applications and follow predictable patterns. Participants value interpretability positively across different levels of AI autonomy and accuracy, and rate interpretability as more important for AI decisions involving high stakes and scarce resources. Crucially, when AI interpretability trades off against AI accuracy, participants prioritize accuracy over interpretability under the same conditions driving positive attitudes towards interpretability in the first place: amidst high stakes and scarce resources. These attitudes could drive a proliferation of AI systems making high-impact ethical decisions that are difficult to explain and understand.


Discussion

In recent years, academics, policymakers, and developers have debated whether interpretability is a fundamental prerequisite for trust in AI systems. However, it remains unknown whether non-experts, who may ultimately comprise a significant portion of end-users for AI applications, actually care about AI interpretability, and if so, under what conditions. Here, we characterise public attitudes towards interpretability in AI across seven studies. Our data demonstrate that people consider interpretability in AI to be important. Even though these positive attitudes generalise across a host of AI applications and show systematic patterns of variation, they also seem to be capricious. While people valued interpretability as similarly important for AI systems that directly implemented decisions and for AI systems that recommended a course of action to a human (Study 1A), they valued interpretability more for applications involving higher (relative to lower) stakes and for applications determining access to scarce (relative to abundant) resources (Studies 1A–C, Study 2). And while participants valued AI interpretability across all levels of AI accuracy when considering the two attributes independently (Study 3A), they sacrificed interpretability for accuracy when the two attributes traded off against one another (Studies 3B–C). Furthermore, participants favoured accuracy over interpretability under the same conditions that drove importance ratings of interpretability in the first place: when stakes are high and resources are scarce.

Our findings highlight that high-stakes applications, such as medical diagnosis, will generally be met with enhanced requirements for AI interpretability. Notably, this sensitivity to stakes parallels magnitude-sensitivity as a foundational process in the cognitive appraisal of outcomes. The impact of stakes on attitudes towards interpretability was apparent not only in our experiments that manipulated stakes within a given AI application, but also in absolute and relative levels of participants’ valuation of interpretability across applications: take, for instance, ‘hurricane first aid’ and ‘vaccine allocation’ outperforming ‘hiring decisions’, ‘insurance pricing’, and ‘standby seat prioritizing’. Conceivably, this ordering would also emerge if we ranked the applications according to the scope of auditing and control measures imposed on human executives, reflecting interpretability’s essential capacity for verifying appropriate and fair decision processes.

Wednesday, April 24, 2019

The Growing Marketplace For AI Ethics

Forbes Insight Team
Forbes.com
Originally posted March 27, 2019

As companies have raced to adopt artificial intelligence (AI) systems at scale, they have also sped through, and sometimes spun out on, the ethical obstacle course AI often presents.

AI-powered loan and credit approval processes have been marred by unforeseen bias. Same with recruiting tools. Smart speakers have secretly turned on and recorded thousands of minutes of audio of their owners.

Unfortunately, there’s no industry-standard, best-practices handbook on AI ethics for companies to follow—at least not yet. Some large companies, including Microsoft and Google, are developing their own internal ethical frameworks.

A number of think tanks, research organizations, and advocacy groups, meanwhile, have been developing a wide variety of ethical frameworks and guidelines for AI. Below is a brief roundup of some of the more influential models to emerge—from the Asilomar Principles to best-practice recommendations from the AI Now Institute.

“Companies need to study these ethical frameworks because this is no longer a technology question. It’s an existential human one,” says Hanson Hosein, director of the Communication Leadership program at the University of Washington. “These questions must be answered hand-in-hand with whatever’s being asked about how we develop the technology itself.”

The info is here.

Friday, September 14, 2018

What Are “Ethics in Design”?

Victoria Sgarro
slate.com
Originally posted August 13, 2018

Here is an excerpt:

As a product designer, I know that no mandate exists to integrate these ethical checks and balances in our process. While I may hear a lot of these issues raised at speaking events and industry meetups, more “practical” considerations can overshadow these conversations in my day-to-day decision making. When they have to compete with the workaday pressures of budgets, roadmaps, and clients, these questions won’t emerge as priorities organically.

Most important, then, is action. Castillo worries that the conversation about “ethics in design” could become a cliché, like “empathy” or “diversity” in tech, where it’s more talk than walk. She says it’s not surprising that ethics in tech hasn’t been addressed in depth in the past, given the industry’s lack of diversity. Because most tech employees come from socially privileged backgrounds, they may not be as attuned to ethical concerns. A designer who identifies with society’s dominant culture may have less personal need to take another perspective. Indeed, identification with a society’s majority has been shown to correlate with less critical awareness of the world outside one’s own experience. Castillo says that, as a black woman in America, she’s a bit wary of this conversation’s effectiveness if it remains only a conversation.

“You know how someone says, ‘Why’d you become a nurse or doctor?’ And they say, ‘I want to help people’?” asks Castillo. “Wouldn’t it be cool if someone says, ‘Why’d you become an engineer or a product designer?’ And you say, ‘I want to help people.’ ”

The info is here.

Wednesday, September 12, 2018

How Could Commercial Terms of Use and Privacy Policies Undermine Informed Consent in the Age of Mobile Health?

Cynthia E. Schairer, Caryn Kseniya Rubanovich, and Cinnamon S. Bloss
AMA J Ethics. 2018;20(9):E864-872.

Abstract

Granular personal data generated by mobile health (mHealth) technologies coupled with the complexity of mHealth systems creates risks to privacy that are difficult to foresee, understand, and communicate, especially for purposes of informed consent. Moreover, commercial terms of use, to which users are almost always required to agree, depart significantly from standards of informed consent. As data use scandals increasingly surface in the news, the field of mHealth must advocate for user-centered privacy and informed consent practices that motivate patients’ and research participants’ trust. We review the challenges and relevance of informed consent and discuss opportunities for creating new standards for user-centered informed consent processes in the age of mHealth.

The info is here.

Friday, February 2, 2018

Has Technology Lost Society's Trust?

Mustafa Suleyman
thersa.org
Originally published January 8, 2018

Has technology lost society's trust? Mustafa Suleyman, co-founder and Head of Applied AI at DeepMind, considers what tech companies have got wrong, how to fix it, and how technology companies can change the world for the better. (7-minute video)