Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Data Ethics.

Wednesday, May 10, 2023

Foundation Models are exciting, but they should not disrupt the foundations of caring

Morley, Jessica and Floridi, Luciano
(April 20, 2023).

Abstract

The arrival of Foundation Models in general, and Large Language Models (LLMs) in particular, capable of ‘passing’ medical qualification exams at or above a human level, has sparked a new wave of ‘the chatbot will see you now’ hype. It is exciting to witness such impressive technological progress, and LLMs have the potential to benefit healthcare systems, providers, and patients. However, these benefits are unlikely to be realised by propagating the myth that, just because LLMs are sometimes capable of passing medical exams, they will ever be capable of supplanting any of the main diagnostic, prognostic, or treatment tasks of a human clinician. Contrary to popular discourse, LLMs are not necessarily more efficient, objective, or accurate than human healthcare providers. They are vulnerable to errors in underlying ‘training’ data and prone to ‘hallucinating’ false information rather than facts. Moreover, there are nuanced, qualitative, or less measurable reasons why it is prudent to be mindful of hyperbolic claims regarding the transformative power of LLMs. Here we discuss these reasons, including contextualisation, empowerment, learned intermediaries, manipulation, and empathy. We conclude that overstating the current potential of LLMs does a disservice to the complexity of healthcare and the skills of healthcare practitioners and risks a ‘costly’ new AI winter. A balanced discussion recognising the potential benefits and limitations can help avoid this outcome.

Conclusion

The technical feats achieved by foundation models in the last five years, and especially in the last six months, are undeniably impressive. Also undeniable is the fact that most healthcare systems across the world are under considerable strain. It is right, therefore, to recognise and invest in the potentially transformative power of models such as Med-PaLM and ChatGPT – healthcare systems will almost certainly benefit. However, overstating their current potential does a disservice to the complexity of healthcare and the skills required of healthcare practitioners. Not only does this ‘hype’ risk direct patient and societal harm, but it also risks re-creating the conditions of previous AI winters, when investors and enthusiasts became discouraged by technological developments that over-promised and under-delivered. This could be the most harmful outcome of all, resulting in significant opportunity costs and missed chances to transform healthcare and benefit patients in smaller, but more positively impactful, ways. A balanced approach recognising the potential benefits and limitations can help avoid this outcome.

Thursday, October 17, 2019

Why Having a Chief Data Ethics Officer is Worth Consideration

The National Law Review
Originally published September 20, 2019

Emerging technology has vastly outpaced corporate governance and strategy, and companies’ historical approach to data has consistently been to “grab it” first and figure out a way to use and monetize it later. Today’s consumers are becoming more educated and savvy about how companies collect, use, and monetize their data. They are starting to make buying decisions based on privacy considerations, and to complain to regulators and lawmakers about how the tech industry uses their data without their control or authorization.

Although consumers’ education is slowly deepening, data privacy laws, both internationally and in the U.S., are starting to address consumers’ concerns about the vast amount of individually identifiable data about them that is collected, used and disclosed.

Data ethics is something that big tech companies are starting to look at (rightfully so), because consumers, regulators, and lawmakers are requiring them to do so. But tech companies should consider treating data ethics as a fundamental core value of the company’s mission, and should determine how it will be addressed in their corporate governance structure.

The info is here.

Wednesday, May 8, 2019

The ethics of emerging technologies

Jessica Hallman
www.techxplore.com
Originally posted April 10, 2019
Here is an excerpt:

"There's a new field called data ethics," said Fonseca, associate professor in Penn State's College of Information Sciences and Technology and 2019-2020 Faculty Fellow in Penn State's Rock Ethics Institute. "We are collecting data and using it in many different ways. We need to start thinking more about how we're using it and what we're doing with it."

By approaching emerging technology with a philosophical perspective, Fonseca can explore the ethical dilemmas surrounding how we gather, manage and use information. He explained that with the rise of big data, for example, many scientists and analysts are foregoing formulating hypotheses in favor of allowing data to make inferences on particular problems.

"Normally, in science, theory drives observations. Our theoretical understanding guides both what we choose to observe and how we choose to observe it," Fonseca explained. "Now, with so much data available, science's classical picture of theory-building is under threat of being inverted, with data being suggested as the source of theories in what is being called data-driven science."

Fonseca shared these thoughts in his paper, "Cyber-Human Systems of Thought and Understanding," which was published in the April 2019 issue of the Journal of the Association of Information Sciences and Technology. Fonseca co-authored the paper with Michael Marcinowski, College of Liberal Arts, Bath Spa University, United Kingdom; and Clodoveu Davis, Computer Science Department, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil.

In the paper, the researchers propose a concept to bridge the divide between theoretical thinking and a-theoretical, data-driven science.

The info is here.

Monday, December 3, 2018

Our lack of interest in data ethics will come back to haunt us

Jayson Demers
thenextweb.com
Originally posted November 4, 2018

Here is an excerpt:

Most people understand the privacy concerns that can arise with collecting and harnessing big data, but the ethical concerns run far deeper than that.

These are just a smattering of the ethical problems in big data:

  • Ownership: Who really “owns” your personal data, like what your political preferences are, or which types of products you’ve bought in the past? Is it you? Or is it public information? What about people under the age of 18? What about information you’ve tried to privatize?
  • Bias: Biases in algorithms can have potentially destructive effects. Everything from facial recognition to chatbots can be skewed to favor one demographic over another, or one set of values over another, based on the data used to power it.
  • Transparency: Are companies required to disclose how they collect and use data? Or are they free to hide some of their efforts? More importantly, who gets to decide the answer here?
  • Consent: What does it take to “consent” to having your data harvested? Is your passive use of a platform enough? What about agreeing to multi-page, complexly worded Terms and Conditions documents?

If you haven’t heard about these or haven’t thought much about them, I can’t really blame you. We aren’t bringing these questions to the forefront of the public discussion on big data, nor are big data companies going out of their way to discuss them before issuing new solutions.

“Oops, our bad”

One of the biggest problems we keep running into is what I call the “oops, our bad” effect. The idea here is that big companies and data scientists use and abuse data however they want, outside the public eye and without having an ethical discussion about their practices. If and when the public finds out that some egregious activity took place, there’s usually a short-lived public outcry, and the company issues an apology for the actions — without really changing their practices or making up for the damage.

The info is here.

Saturday, December 3, 2016

Data Ethics: The New Competitive Advantage

Gry Hasselbalch
Tech Crunch
Originally posted November 14, 2016

Here is an excerpt:

What is data ethics?

Ethical companies in today’s big data era are doing more than just complying with data protection legislation. They also follow the spirit and vision of the legislation by listening closely to their customers. They’re implementing credible and clear transparency policies for data management. They’re only processing necessary data and developing privacy-aware corporate cultures and organizational structures. Some are developing products and services using Privacy by Design.

A data-ethical company sustains ethical values relating to data, asking: Is this something I myself would accept as a consumer? Is this something I want my children to grow up with? A company’s degree of “data ethics awareness” is not only crucial for survival in a market where consumers progressively set the bar, it’s also necessary for society as a whole. It plays a similar role as a company’s environmental conscience — essential for company survival, but also for the planet’s welfare. Yet there isn’t a one-size-fits-all solution, perfect for every ethical dilemma. We’re in an age of experimentation where laws, technology and, perhaps most importantly, our limits as individuals are tested and negotiated on a daily basis.

The article is here.

Saturday, November 26, 2016

What is data ethics?

Luciano Floridi and Mariarosaria Taddeo
Philosophical Transactions of the Royal Society A

This theme issue has the founding ambition of landscaping data ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing and use), algorithms (including artificial intelligence, artificial agents, machine learning and robots) and corresponding practices (including responsible innovation, programming, hacking and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data ethics builds on the foundation provided by computer and information ethics but, at the same time, it refines the approach endorsed so far in this research field, by shifting the level of abstraction of ethical enquiries, from being information-centric to being data-centric. This shift brings into focus the different moral dimensions of all kinds of data, even data that never translate directly into information but can be used to support actions or generate behaviours, for example. It highlights the need for ethical analyses to concentrate on the content and nature of computational operations—the interactions among hardware, software and data—rather than on the variety of digital technologies that enable them. And it emphasizes the complexity of the ethical challenges posed by data science. Because of such complexity, data ethics should be developed from the start as a macroethics, that is, as an overall framework that avoids narrow, ad hoc approaches and addresses the ethical impact and implications of data science and its applications within a consistent, holistic and inclusive framework. Only as a macroethics will data ethics provide solutions that can maximize the value of data science for our societies, for all of us and for our environments. This article is part of the themed issue ‘The ethical impact of data science’.

The article is here.