Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Design. Show all posts

Monday, July 29, 2019

AI Ethics – Too Principled to Fail?

Brent Mittelstadt
Oxford Internet Institute
https://ssrn.com/abstract=3391293

Abstract

AI Ethics is now a global topic of discussion in academic and policy circles. At least 63 public-private initiatives have produced statements describing high-level principles, values, and other tenets to guide the ethical development, deployment, and governance of AI. According to recent meta-analyses, AI Ethics has seemingly converged on a set of principles that closely resemble the four classic principles of medical ethics. Despite the initial credibility granted to a principled approach to AI Ethics by the connection to principles in medical ethics, there are reasons to be concerned about its future impact on AI development and governance. Significant differences exist between medicine and AI development that suggest a principled approach in the latter may not enjoy success comparable to the former. Compared to medicine, AI development lacks (1) common aims and fiduciary duties, (2) professional history and norms, (3) proven methods to translate principles into practice, and (4) robust legal and professional accountability mechanisms. These differences suggest we should not yet celebrate consensus around high-level principles that hide deep political and normative disagreement.

The paper is here.

Shift from professional ethics to business ethics

The outputs of many AI Ethics initiatives resemble professional codes of ethics that address design requirements and the behaviours and values of individual professionals. The legitimacy of particular applications and their underlying business interests remain largely unquestioned. This approach conveniently steers debate towards the transgressions of unethical individuals, and away from the collective failure of unethical businesses and business models. Developers will always be constrained by the institutions that employ them. To be truly effective, the ethical challenges of AI cannot be conceptualised as individual failures. Going forward, AI Ethics must become an ethics of AI businesses as well.

Wednesday, May 22, 2019

Healthcare portraiture and unconscious bias

Karthik Sivashanker, Kathryn Rexrode, and others
BMJ 2019;365:l1668
Published April 12, 2019
https://doi.org/10.1136/bmj.l1668

Here is an excerpt:

Conveying the right message

In this regard, healthcare organisations have opportunities to instil a feeling of belonging and comfort for all their employees and patients. A simple but critical step is to examine the effect that their use of all imagery, as exemplified by portraits, has on their constituents. Are these portraits sufficiently conveying a message of social justice and equity? Do they highlight the achievement (as with a picture of a petri dish), or the person (a picture of Alexander Fleming without sufficient acknowledgment of his contributions)? Further still, do these images reveal the values of the organisation or its biases?

At our institution in Boston there was no question that the leaders depicted had made meaningful contributions to our hospital and healthcare. After soliciting feedback through listening sessions, open forums, and inbox feedback from our art committee, employees, clinicians, and students, however, our institution agreed to hang these portraits in their respective departments. This decision aimed to balance a commitment to equity with an intent to honourably display these portraits, which have inspired generations of physicians and scientists to be their best. It also led our social justice and equity committee to tackle problems like unconscious bias and diversity in hiring. In doing so, we are acknowledging the close interplay of symbolism and policy making in perpetuating racial and sex inequities, and the importance of tackling both together.

The info is here.

Friday, September 14, 2018

What Are “Ethics in Design”?

Victoria Sgarro
slate.com
Originally posted August 13, 2018

Here is an excerpt:

As a product designer, I know that no mandate exists to integrate these ethical checks and balances in our process. While I may hear a lot of these issues raised at speaking events and industry meetups, more “practical” considerations can overshadow these conversations in my day-to-day decision making. When they have to compete with the workaday pressures of budgets, roadmaps, and clients, these questions won’t emerge as priorities organically.

Most important, then, is action. Castillo worries that the conversation about “ethics in design” could become a cliché, like “empathy” or “diversity” in tech, where it’s more talk than walk. She says it’s not surprising that ethics in tech hasn’t been addressed in depth in the past, given the industry’s lack of diversity. Because most tech employees come from socially privileged backgrounds, they may not be as attuned to ethical concerns. A designer who identifies with society’s dominant culture may have less personal need to take another perspective. Indeed, identification with a society’s majority is shown to be correlated with less critical awareness of the world outside of yourself. Castillo says that, as a black woman in America, she’s a bit wary of this conversation’s effectiveness if it remains only a conversation.

“You know how someone says, ‘Why’d you become a nurse or doctor?’ And they say, ‘I want to help people’?” asks Castillo. “Wouldn’t it be cool if someone says, ‘Why’d you become an engineer or a product designer?’ And you say, ‘I want to help people.’ ”

The info is here.

Tuesday, April 10, 2018

Should We Root for Robot Rights?

Evan Selinger
Medium.com
Originally posted February 15, 2018

Here is an excerpt:

Maybe there’s a better way forward — one where machines aren’t kept firmly in their machine-only place, humans don’t get wiped out Skynet-style, and our humanity isn’t sacrificed by giving robots a better deal.

While the legal challenges ahead may seem daunting, they pose enticing puzzles for many thoughtful legal minds, who are even now diligently embracing the task. Annual conferences like We Robot — to pick but one example — bring together the best and the brightest to imagine and propose creative regulatory frameworks that would impose accountability in various contexts on designers, insurers, sellers, and owners of autonomous systems.

From the application of centuries-old concepts like “agency” to designing cutting-edge concepts for drones and robots on the battlefield, these folks are ready to explore the hard problems of machines acting with varying shades of autonomy. For the foreseeable future, these legal theories will include clear lines of legal responsibility for the humans in the loop, particularly those who abuse technology either intentionally or through carelessness.

The social impacts of our seemingly insatiable need to interact with our devices have been drawing accelerated attention for at least a decade. From the American Academy of Pediatrics creating recommendations for limiting screen time to updating etiquette and social mores for devices while dining, we are attacking these problems through both institutional and cultural channels.

The article is here.

Wednesday, January 31, 2018

I Believe In Intelligent Design....For Robots

Matt Simon
Wired Magazine
Originally published January 3, 2018

Here is an excerpt:

Roboticists are honing their robots by essentially mimicking natural selection. Keep what works, throw out what doesn’t, to optimally adapt a robot to a particular job. “If we want to scrap something totally, we can do that,” says Nick Gravish, who studies the intersection of robotics and biology at UC San Diego. “Or we can take the best pieces from some design and put them in a new design and get rid of the things we don't need.” Think of it, then, like intelligent design—that follows the principles of natural selection.

The caveat being, biology is rather more inflexible than what roboticists are doing. After all, you can give your biped robot two extra limbs and turn it into a quadruped fairly quickly, while animals change their features—cave-dwelling species might lose their eyes, for instance—over thousands of years. “Evolution is as much a trap as a means to advance,” says Gerald Loeb, CEO and co-founder of SynTouch, which is giving robots the power to feel. “Because you get locked into a lot of hardware that worked well in previous iterations and now can't be changed because you've built your whole embryology on it.”

Evolution can still be rather explosive, though. About 550 million years ago the Cambrian Explosion kicked off, giving birth to an incredible array of complex organisms. Before that, life was relatively squishier, relatively calmer. But then, boom: predators aplenty, scrapping like hell to gain an edge.

The article is here.

Thursday, November 3, 2016

In the World of A.I. Ethics, the Answers Are Murky

Mike Brown
Inverse
Originally posted October 12, 2016

Here is an excerpt:

“We’re not issuing a formal code of ethics. No hard-coded rules are really possible,” Raja Chatila, chair of the initiative’s executive committee, tells Inverse. “The final aim is to ensure every technologist is educated, trained, and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems.”

It all sounds lovely, but surely a lot of this is ignoring cross-cultural differences. What if, culturally, you hold different values about how your money app should manage your checking account? A 2014 YouGov poll found that 63 percent of British citizens believed that, morally, people have a duty to contribute money to public services through taxation. In the United States, that figure was just 37 percent, with a majority instead responding that there was a stronger moral argument that people have a right to the money they earn. Is it even possible to come up with a single, universal code of ethics that could translate across cultures for advanced A.I.?

The article is here.