Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Data Ethics.

Tuesday, August 19, 2025

Data ethics and the Canadian Code of Ethics for Psychologists

Fabricius, A., O'Doherty, K., & Yen, J. (2025).
Canadian Psychology / Psychologie canadienne.
Advance online publication.

Abstract

The pervasive influence of digital data in contemporary society presents research psychologists with significant ethical challenges that have yet to be fully recognized or addressed. The rapid evolution of data technologies and integration into research practices has outpaced the guidance provided by existing ethical frameworks and regulations, leaving researchers vulnerable to unethical decision making about data. This is important to recognize because data is now imbued with substantial financial value and enables relations with many powerful entities, like governments and corporations. Accordingly, decision making about data can have far-reaching and harmful consequences for participants and society. As we approach the Canadian Code of Ethics for Psychologists’ 40th anniversary, we highlight the need for small updates to its ethical standards with respect to data practices in psychological research. We examine two common data practices that have largely escaped thorough ethical scrutiny among psychologists: the use of Amazon’s Mechanical Turk for data collection and the creation and expansion of microtargeting, including recruitment for psychological research. We read these examples and psychologists’ reactions to them against the current version of the Code. We close by offering specific recommendations for expanding the Code’s standards, though also considering the role of policy, guidelines, and position papers.
Impact Statement

This study argues that psychologists must develop a better understanding of the kinds of ethical issues their data practices raise. We offer recommendations for how the Canadian Code of Ethics for Psychologists might update its standards to account for data ethics issues and offer improved guidance. Importantly, we can no longer limit our ethical guidance on data to its role in knowledge production—we must account for the fact that data puts us in relation with corporations and governments, as well.

Here are some thoughts:

The digital data revolution has introduced significant, under-recognized ethical challenges in psychological research, necessitating urgent updates to the Canadian Code of Ethics for Psychologists. Data is no longer just a tool for knowledge—it is a valuable commodity embedded in complex power relations with corporations and governments, enabling surveillance, exploitation, and societal harm.

Two common practices illustrate these concerns. First, Amazon’s Mechanical Turk (MTurk) is widely used for data collection, yet it relies on a global workforce of “turkers” who are severely underpaid, lack labor protections, and are subject to algorithmic control. Psychologists often treat them as disposable labor, withholding payment for incomplete tasks—violating core ethical principles around fair compensation, informed consent, and protection of vulnerable populations. Turkers occupy a dual role as both research participants and precarious workers—a status unacknowledged by current ethics codes or research ethics boards (REBs).
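
One concrete way for a researcher to act on the fair-compensation concern is to check a task's effective hourly rate before posting it. The sketch below is a minimal illustration, not a procedure from the paper; the target rate, completion time, and payment figures are invented assumptions.

```python
# Hedged sketch: checking whether a crowdsourced task's payment meets a
# target hourly rate before posting it. All figures are illustrative.
TARGET_HOURLY_RATE = 15.00          # assumed living-wage benchmark, USD
median_completion_minutes = 12.0    # e.g., measured in your own pilot run
proposed_payment = 2.00             # payment per completed task, USD

effective_rate = proposed_payment / (median_completion_minutes / 60.0)
print(f"effective rate: ${effective_rate:.2f}/hour")

if effective_rate < TARGET_HOURLY_RATE:
    fair_payment = TARGET_HOURLY_RATE * median_completion_minutes / 60.0
    print(f"raise payment to at least ${fair_payment:.2f} per task")
```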

Second, microtargeting, the use of behavioral data to predict and influence individuals, has deep roots in psychology. Research on personality profiling via social media (e.g., the MyPersonality app) enabled companies like Cambridge Analytica to manipulate voters. Now, psychologists are adopting microtargeting to recruit clinical populations, using algorithms to infer sensitive mental health conditions without users’ knowledge. This risks “outing” individuals, enabling discrimination, and transferring control of data to private, unregulated platforms.
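
To make the mechanics concrete, here is a minimal sketch of the kind of model that underlies like-based psychological profiling. It uses random stand-in data and invented names, not the MyPersonality dataset, and a simple logistic regression stands in for whatever proprietary models platforms actually run.

```python
# Illustrative sketch of like-based trait inference, the style of model
# behind psychological microtargeting. All data here is random stand-in data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
likes = rng.integers(0, 2, size=(1000, 50))   # 1,000 users x 50 pages liked
trait = rng.integers(0, 2, size=1000)         # e.g., high vs. low extraversion

X_train, X_test, y_train, y_test = train_test_split(
    likes, trait, test_size=0.2, random_state=0
)

# Each page like becomes a weighted signal about a psychological attribute.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The ethical problem: once trained, the model scores any user with a like
# history, including people who never consented to psychological assessment.
scores = model.predict_proba(X_test)[:, 1]
print("first five inferred trait probabilities:", scores[:5].round(2))
```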

Current ethical frameworks are outdated, focusing narrowly on data as an epistemic resource while ignoring its economic and political dimensions. The Code mentions “data” only six times and fails to address modern risks like corporate data sharing, government surveillance, or re-identification.
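
The re-identification risk mentioned above can be made tangible with a k-anonymity check: counting how many records share each combination of quasi-identifiers. This is a hedged sketch with invented records and column names, not an analysis from the paper.

```python
# Minimal k-anonymity check: records whose quasi-identifier combination is
# rare are easy to re-identify even after names are removed. Invented data.
from collections import Counter

records = [
    {"zip": "16801", "age_band": "30-39", "sex": "F", "diagnosis": "GAD"},
    {"zip": "16801", "age_band": "30-39", "sex": "F", "diagnosis": "MDD"},
    {"zip": "16802", "age_band": "40-49", "sex": "M", "diagnosis": "PTSD"},
]
quasi_identifiers = ("zip", "age_band", "sex")

groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
k = min(groups.values())
print(f"dataset is {k}-anonymous")  # k = 1 here: one record is unique

for combo, size in groups.items():
    if size == 1:
        print("re-identifiable combination:", combo)
```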

Tuesday, February 4, 2025

Advancing AI Data Ethics in Nursing: Future Directions for Nursing Practice, Research, and Education

Dunlap, P. A. B., & Michalowski, M. (2024).
JMIR Nursing, 7, e62678.

Abstract

The ethics of artificial intelligence (AI) are increasingly recognized due to concerns such as algorithmic bias, opacity, trust issues, data security, and fairness. Specifically, machine learning algorithms, central to AI technologies, are essential in striving for ethically sound systems that mimic human intelligence. These technologies rely heavily on data, which often remain obscured within complex systems and must be prioritized for ethical collection, processing, and usage. The significance of data ethics in achieving responsible AI was first highlighted in the broader context of health care and subsequently in nursing. This viewpoint explores the principles of data ethics, drawing on relevant frameworks and strategies identified through a formal literature review. These principles apply to real-world and synthetic data in AI and machine-learning contexts. Additionally, the data-centric AI paradigm is briefly examined, emphasizing its focus on data quality and the ethical development of AI solutions that integrate human-centered domain expertise. The ethical considerations specific to nursing are addressed, including 4 recommendations for future directions in nursing practice, research, and education and 2 hypothetical nurse-focused ethical case studies. The primary objective is to position nurses to actively participate in AI and data ethics, thereby contributing to the creation of high-quality and relevant data for machine learning applications.

Here are some thoughts:

The article explores integrating AI in nursing, focusing on ethical considerations vital to patient trust and care quality. It identifies risks like bias, data privacy issues, and the erosion of human-centered care. The paper argues for interdisciplinary frameworks and education to help nurses navigate these challenges. Ethics ensure AI aligns with professional values, safeguarding equity, autonomy, and informed decision-making. With thoughtful integration, AI can empower nursing while upholding ethical standards.

Wednesday, May 10, 2023

Foundation Models are exciting, but they should not disrupt the foundations of caring

Morley, J., & Floridi, L. (2023, April 20).

Abstract

The arrival of Foundation Models in general, and Large Language Models (LLMs) in particular, capable of ‘passing’ medical qualification exams at or above a human level, has sparked a new wave of ‘the chatbot will see you now’ hype. It is exciting to witness such impressive technological progress, and LLMs have the potential to benefit healthcare systems, providers, and patients. However, these benefits are unlikely to be realised by propagating the myth that, just because LLMs are sometimes capable of passing medical exams, they will ever be capable of supplanting any of the main diagnostic, prognostic, or treatment tasks of a human clinician. Contrary to popular discourse, LLMs are not necessarily more efficient, objective, or accurate than human healthcare providers. They are vulnerable to errors in underlying ‘training’ data and prone to ‘hallucinating’ false information rather than facts. Moreover, there are nuanced, qualitative, or less measurable reasons why it is prudent to be mindful of hyperbolic claims regarding the transformative power of LLMs. Here we discuss these reasons, including contextualisation, empowerment, learned intermediaries, manipulation, and empathy. We conclude that overstating the current potential of LLMs does a disservice to the complexity of healthcare and the skills of healthcare practitioners and risks a ‘costly’ new AI winter. A balanced discussion recognising the potential benefits and limitations can help avoid this outcome.

Conclusion

The technical feats achieved by foundation models in the last five years, and especially in the last six months, are undeniably impressive. Also undeniable is the fact that most healthcare systems across the world are under considerable strain. It is right, therefore, to recognise and invest in the potentially transformative power of models such as Med-PaLM and ChatGPT – healthcare systems will almost certainly benefit. However, overstating their current potential does a disservice to the complexity of healthcare and the skills required of healthcare practitioners. Not only does this ‘hype’ risk direct patient and societal harm, but it also risks re-creating the conditions of previous AI winters, when investors and enthusiasts became discouraged by technological developments that over-promised and under-delivered. This could be the most harmful outcome of all, resulting in significant opportunity costs and missed chances to transform healthcare and benefit patients in smaller, but more positively impactful, ways. A balanced approach recognising the potential benefits and limitations can help avoid this outcome.

Thursday, October 17, 2019

Why Having a Chief Data Ethics Officer is Worth Consideration

The National Law Review
Originally published September 20, 2019

Emerging technology has vastly outpaced corporate governance and strategy, and companies’ approach to data has long been to grab it first and figure out how to use and monetize it later. Today’s consumers are becoming more educated and savvy about how companies collect, use, and monetize their data; they are starting to make buying decisions based on privacy considerations and to complain to regulators and lawmakers about how the tech industry uses their data without their control or authorization.

As consumers’ awareness slowly deepens, data privacy laws, both internationally and in the U.S., are starting to address consumers’ concerns about the vast amount of individually identifiable data about them that is collected, used, and disclosed.

Data ethics is something that big tech companies are starting to look at (rightfully so), because consumers, regulators, and lawmakers are requiring them to do so. But tech companies should treat data ethics as a fundamental core value of the company’s mission and determine how it will be addressed in their corporate governance structures.

The info is here.

Wednesday, May 8, 2019

The ethics of emerging technologies

Jessica Hallman
www.techxplore.com
Originally posted April 10, 2019


Here is an excerpt:

"There's a new field called data ethics," said Fonseca, associate professor in Penn State's College of Information Sciences and Technology and 2019-2020 Faculty Fellow in Penn State's Rock Ethics Institute. "We are collecting data and using it in many different ways. We need to start thinking more about how we're using it and what we're doing with it."

By approaching emerging technology with a philosophical perspective, Fonseca can explore the ethical dilemmas surrounding how we gather, manage and use information. He explained that with the rise of big data, for example, many scientists and analysts are forgoing formulating hypotheses in favor of letting the data drive inferences about particular problems.

"Normally, in science, theory drives observations. Our theoretical understanding guides both what we choose to observe and how we choose to observe it," Fonseca explained. "Now, with so much data available, science's classical picture of theory-building is under threat of being inverted, with data being suggested as the source of theories in what is being called data-driven science."

Fonseca shared these thoughts in his paper, "Cyber-Human Systems of Thought and Understanding," which was published in the April 2019 issue of the Journal of the Association of Information Sciences and Technology. Fonseca co-authored the paper with Michael Marcinowski, College of Liberal Arts, Bath Spa University, United Kingdom; and Clodoveu Davis, Computer Science Department, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil.

In the paper, the researchers propose a concept to bridge the divide between theoretical thinking and a-theoretical, data-driven science.

The info is here.

Monday, December 3, 2018

Our lack of interest in data ethics will come back to haunt us

Jayson Demers
thenextweb.com
Originally posted November 4, 2018

Here is an excerpt:

Most people understand the privacy concerns that can arise with collecting and harnessing big data, but the ethical concerns run far deeper than that.

Here is just a smattering of the ethical problems in big data:

  • Ownership: Who really “owns” your personal data, like what your political preferences are, or which types of products you’ve bought in the past? Is it you? Or is it public information? What about people under the age of 18? What about information you’ve tried to privatize?
  • Bias: Biases in algorithms can have potentially destructive effects. Everything from facial recognition to chatbots can be skewed to favor one demographic over another, or one set of values over another, based on the data used to power it. (A minimal audit sketch follows this list.)
  • Transparency: Are companies required to disclose how they collect and use data? Or are they free to hide some of their efforts? More importantly, who gets to decide the answer here?
  • Consent: What does it take to “consent” to having your data harvested? Is your passive use of a platform enough? What about agreeing to multi-page, complexly worded Terms and Conditions documents?
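
One way to make the bias question operational is to disaggregate a model's accuracy by demographic group; a large gap is the kind of skew the list describes. The sketch below uses invented predictions and is a first-pass check, not a complete fairness audit.

```python
# Disaggregated accuracy: compare how often a model is correct for each
# demographic group. The (group, truth, prediction) triples are invented.
from collections import defaultdict

results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in results:
    total[group] += 1
    correct[group] += int(truth == pred)

for group in sorted(total):
    print(f"{group}: accuracy {correct[group] / total[group]:.2f} "
          f"over {total[group]} examples")
```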

If you haven’t heard about these or haven’t thought much about them, I can’t really blame you. We aren’t bringing these questions to the forefront of the public discussion on big data, nor are big data companies going out of their way to discuss them before issuing new solutions.

“Oops, our bad”

One of the biggest problems we keep running into is what I call the “oops, our bad” effect. The idea here is that big companies and data scientists use and abuse data however they want, outside the public eye and without having an ethical discussion about their practices. If and when the public finds out that some egregious activity took place, there’s usually a short-lived public outcry, and the company issues an apology for the actions — without really changing its practices or making up for the damage.

The info is here.

Saturday, December 3, 2016

Data Ethics: The New Competitive Advantage

Gry Hasselbalch
Tech Crunch
Originally posted November 14, 2016

Here is an excerpt:

What is data ethics?

Ethical companies in today’s big data era are doing more than just complying with data protection legislation. They also follow the spirit and vision of the legislation by listening closely to their customers. They’re implementing credible and clear transparency policies for data management. They’re only processing necessary data and developing privacy-aware corporate cultures and organizational structures. Some are developing products and services using Privacy by Design.
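
The “only processing necessary data” point can be expressed directly in code as data minimization: strip each record to the fields a declared purpose actually needs, at the point of collection. This is a minimal sketch in the Privacy by Design spirit; the purpose registry and field names are invented for illustration.

```python
# Data-minimization sketch: each declared purpose may only see the fields
# it needs. The purposes and field names below are invented examples.
PURPOSE_FIELDS = {
    "order_fulfilment": {"name", "address", "items"},
    "product_analytics": {"items"},  # analytics needs no identity fields
}

def minimize(record: dict, purpose: str) -> dict:
    """Return a copy of record stripped to the fields the purpose allows."""
    allowed = PURPOSE_FIELDS[purpose]
    return {key: value for key, value in record.items() if key in allowed}

raw = {
    "name": "A. Customer",
    "address": "123 Example St",
    "items": ["book"],
    "birth_date": "1990-01-01",
}
print(minimize(raw, "product_analytics"))  # -> {'items': ['book']}
```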

A data-ethical company sustains ethical values relating to data, asking: Is this something I myself would accept as a consumer? Is this something I want my children to grow up with? A company’s degree of “data ethics awareness” is not only crucial for survival in a market where consumers progressively set the bar, it’s also necessary for society as a whole. It plays a similar role to a company’s environmental conscience — essential for company survival, but also for the planet’s welfare. Yet there isn’t a one-size-fits-all solution, perfect for every ethical dilemma. We’re in an age of experimentation where laws, technology and, perhaps most importantly, our limits as individuals are tested and negotiated on a daily basis.

The article is here.

Saturday, November 26, 2016

What is data ethics?

Luciano Floridi and Mariarosaria Taddeo
Philosophical Transactions of the Royal Society A

This theme issue has the founding ambition of landscaping data ethics as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing and use), algorithms (including artificial intelligence, artificial agents, machine learning and robots) and corresponding practices (including responsible innovation, programming, hacking and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values). Data ethics builds on the foundation provided by computer and information ethics but, at the same time, it refines the approach endorsed so far in this research field, by shifting the level of abstraction of ethical enquiries, from being information-centric to being data-centric. This shift brings into focus the different moral dimensions of all kinds of data, even data that never translate directly into information but can be used to support actions or generate behaviours, for example. It highlights the need for ethical analyses to concentrate on the content and nature of computational operations—the interactions among hardware, software and data—rather than on the variety of digital technologies that enable them. And it emphasizes the complexity of the ethical challenges posed by data science. Because of such complexity, data ethics should be developed from the start as a macroethics, that is, as an overall framework that avoids narrow, ad hoc approaches and addresses the ethical impact and implications of data science and its applications within a consistent, holistic and inclusive framework. Only as a macroethics will data ethics provide solutions that can maximize the value of data science for our societies, for all of us and for our environments. This article is part of the themed issue ‘The ethical impact of data science’.

The article is here.