Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Digital Ethics.

Monday, September 23, 2019

Three things digital ethics can learn from medical ethics

Carissa Véliz
Nature Electronics 2:316-318 (2019)

Here is an excerpt:

Similarly, technological decisions are not only about facts (for example, about what is more efficient), but also about the kind of life we want and the kind of society we strive to build. The beginning of the digital age has been plagued by impositions, with technology companies often including a disclaimer in their terms and conditions that “they can unilaterally change their terms of service agreement without any notice of changes to the users”. Changes towards more respect for autonomy, however, can already be seen. With the implementation of the GDPR in Europe, for instance, tech companies are being urged to accept that people may prefer services that are less efficient or possess less functionality if that means they get to keep their privacy.

One of the ways in which technology has failed to respect autonomy is through the use of persuasive technologies. Digital technologies that are designed to chronically distract us not only jeopardize our attention, but also our will, both individually and collectively. Technologies that constantly hijack our attention threaten the resources we need to exercise our autonomy. If one were to ask people about their goals in life, most people would likely mention things such as “spending more time with family” — not many people would suggest “spending more time on Facebook”. Yet most people do not accomplish their goals — we get distracted.


Tuesday, January 1, 2019

AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations

Floridi, L., Cowls, J., Beltrametti, M. et al.
Minds & Machines (2018).
https://doi.org/10.1007/s11023-018-9482-5

Abstract

This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.

Tuesday, October 23, 2018

Why you need a code of ethics (and how to build one that sticks)

Josh Fruhlinger
cio.com
Originally posted September 17, 2018

Here is an excerpt:

Most of us probably think of ourselves as ethical people. But within organizations built to maximize profits, many seemingly inevitably drift towards more dubious behavior, especially when it comes to user personal data. "More companies than not are collecting data just for the sake of collecting data, without having any reason as to why or what to do with it," says Philip Jones, a GDPR regulatory compliance expert at Capgemini. "Although this is an expensive and unethical approach, most businesses don’t think twice about it. I view this approach as one of the highest risks to companies today, because they have no clue where, how long, or how accurate much of their private data is on consumers."

This is the sort of organizational ethical drift that can arise in the absence of clear ethical guidelines—and it's the sort of drift that laws like the GDPR, the EU's stringent new framework for how companies must handle customer data, are meant to counter. And the temptation is certainly there to simply use such regulations as a de facto ethics policy. "The GDPR and laws like it make the process of creating a digital ethics policy much easier than it once was," says Ian McClarty, President and CEO of PhoenixNAP. "Anything and everything that an organization does with personal data obtained from an individual must come with the explicit consent of that data owner. It’s very hard to subvert digital ethics when one’s ability to use personal data is curtailed in such a draconian fashion."
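The explicit-consent requirement McClarty describes can be made concrete with a short sketch. The Python below is purely illustrative and not from the article or any particular library; every name in it (DataSubject, ConsentRecord, process_personal_data) is hypothetical. It shows one way an application might refuse to touch personal data unless purpose-specific consent is on record and has not been withdrawn.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable, List, Optional

@dataclass
class ConsentRecord:
    # One data subject's consent for one specific processing purpose.
    purpose: str                      # e.g. "marketing_email"
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def is_active(self) -> bool:
        return self.withdrawn_at is None

@dataclass
class DataSubject:
    subject_id: str
    consents: List[ConsentRecord] = field(default_factory=list)

    def grant(self, purpose: str) -> None:
        self.consents.append(ConsentRecord(purpose, datetime.utcnow()))

    def withdraw(self, purpose: str) -> None:
        for record in self.consents:
            if record.purpose == purpose and record.is_active():
                record.withdrawn_at = datetime.utcnow()

    def has_active_consent(self, purpose: str) -> bool:
        return any(r.purpose == purpose and r.is_active() for r in self.consents)

class ConsentError(Exception):
    # Raised when processing is attempted without matching, active consent.
    pass

def process_personal_data(subject: DataSubject, purpose: str,
                          action: Callable[[DataSubject], None]) -> None:
    # Refuse to run any processing step unless the subject has granted
    # explicit consent for this exact purpose and has not withdrawn it.
    if not subject.has_active_consent(purpose):
        raise ConsentError(
            f"no active consent from {subject.subject_id} for '{purpose}'")
    action(subject)

# Consent is purpose-specific: granting "marketing_email" does not
# authorise, say, behavioural profiling.
alice = DataSubject("alice")
alice.grant("marketing_email")
process_personal_data(alice, "marketing_email",
                      lambda s: print(f"emailing {s.subject_id}"))
alice.withdraw("marketing_email")
# The same call would now raise ConsentError.

A real system would need more than this sketch (an auditable consent log, a purpose registry, retention limits), which connects to the point below that regulation alone cannot serve as an ethics policy.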

But companies cannot simply outsource their ethics codes to regulators and think that hewing to the letter of the law will keep their reputations intact. "New possibilities emerge so fast," says Mads Hennelund, a consultant at Nextwork, "that companies will be forced by market competition to apply new technologies before any regulator has been able to grasp them and impose meaningful rules or standards." He also notes that, if different silos within a company are left to their own devices and subject to their own particular forms of regulation and technology adoption, "the organization as a whole becomes ethically fragmented, consisting of multiple ethically autonomous departments."
