Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care
Sunday, April 14, 2024
AI and the need for justification (to the patient)
Thursday, March 7, 2024
Canada Postpones Plan to Allow Euthanasia for Mentally Ill
- This plan has been postponed until 2027 due to concerns about the healthcare system's readiness and potential ethical issues.
- The original legislation passed in 2021, but concerns about safeguards and mental health support led to delays.
- This issue is complex and ethically charged, with advocates arguing for individual autonomy and opponents raising concerns about coercion and vulnerability.
- Vulnerability: Mental illness can impair judgment, raising concerns about informed consent and potential coercion.
- Safeguards: Concerns exist about insufficient safeguards to prevent abuse or exploitation.
- Mental health access: Limited access to adequate mental health treatment could contribute to undue pressure towards MAiD.
- Social inequalities: Concerns exist about disproportionate access to MAiD based on socioeconomic background.
Monday, February 5, 2024
Should Patients Be Allowed to Die From Anorexia? Is a 'Palliative' Approach to Mental Illness Ethical?
- Respecting Autonomy: Does respecting a patient's autonomy mean allowing them to choose a path that may lead to death, even if their decision is influenced by a mental illness?
- The Line Between Choice and Coercion: How do we differentiate between a genuine desire for death and succumbing to the distorted thinking patterns of anorexia?
- Futility vs. Hope: When is treatment considered futile, and when should hope for recovery, however slim, be prioritized?
- Enhanced Supportive Care: Focusing on improving the patient's quality of life through pain management, emotional support, and addressing underlying psychological issues.
- Conditional Palliative Care: Providing palliative care while continuing to offer and encourage life-sustaining treatment, with the possibility of transitioning back to active recovery if the patient shows signs of willingness.
- Advance Directives: Encouraging patients to discuss their wishes and preferences beforehand, allowing for informed decision-making when faced with difficult choices.
Wednesday, January 10, 2024
Indigenous data sovereignty—A new take on an old theme
A new kind of data revolution is unfolding around the world, one that is unlikely to be on the radar of tech giants and the power brokers of Silicon Valley. Indigenous Data Sovereignty (IDSov) is a rallying cry for Indigenous communities seeking to regain control over their information while pushing back against data colonialism and its myriad harms. Led by Indigenous academics, innovators, and knowledge-holders, IDSov networks now exist in the United States, Canada, Aotearoa (New Zealand), Australia, the Pacific, and Scandinavia, along with an international umbrella group, the Global Indigenous Data Alliance (GIDA). Together, these networks advocate for the rights of Indigenous Peoples over data that derive from them and that pertain to Nation membership, knowledge systems, customs, or territories. This lens on data sovereignty not only exceeds narrow notions of sovereignty as data localization and jurisdictional rights but also upends the assumption that the nation state is the legitimate locus of power. IDSov has thus become an important catalyst for broader conversations about what Indigenous sovereignty means in a digital world and how some measure of self-determination can be achieved under the weight of Big Tech dominance.
Indigenous Peoples are, of course, no strangers to struggles for sovereignty. There are an estimated 476 million Indigenous Peoples worldwide; the actual number is unknown because many governments do not separately identify Indigenous Peoples in their national data collections such as the population census. Colonial legacies of racism; land dispossession; and the suppression of Indigenous cultures, languages, and knowledges have had profound impacts. For example, although Indigenous Peoples make up just 6% of the global population, they account for about 20% of the world’s extreme poor. Despite this, Indigenous Peoples continue to assert their sovereignty and to uphold their responsibilities as protectors and stewards of their lands, waters, and knowledges.
The rest of the article is here.
Here is a brief summary:
This article discusses Indigenous data sovereignty: the principle that Indigenous communities should control data about themselves. Because data can be used to exploit and harm Indigenous communities, data sovereignty offers a means of protection. Guiding principles include collective consent and the upholding of cultural protocols. Though still in its early stages, Indigenous data sovereignty has the potential to be a powerful tool for Indigenous communities.
Saturday, October 21, 2023
Should Trackable Pill Technologies Be Used to Facilitate Adherence Among Patients Without Insight?
Friday, September 22, 2023
Police are Getting DNA Data from People who Think They Opted Out
Friday, June 23, 2023
In the US, patient data privacy is an illusion
Saturday, January 7, 2023
Artificial intelligence and consent: a feminist anti-colonial critique
Tuesday, September 27, 2022
Beyond individualism: Is there a place for relational autonomy in clinical practice and research?
Sunday, February 14, 2021
Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable?
Thursday, January 7, 2021
How Might Artificial Intelligence Applications Impact Risk Management?
Saturday, December 26, 2020
Baby God: how DNA testing uncovered a shocking web of fertility fraud
Sunday, May 31, 2020
The Answer to a COVID-19 Vaccine May Lie in Our Genes, But ...
Scientific American
Originally posted 13 May 2020
Here is an excerpt:
Although the rationale for expanded genetic testing is obviously meant for the greater good, such testing could also bring with it a host of privacy and economic harms. In the past, genetic testing has also been associated with employment discrimination. Even before the current crisis, companies like 23andMe and Ancestry assembled and started operating their own private long-term large-scale databases of U.S. citizens’ genetic and health data. 23andMe and Ancestry recently announced they would use their databases to identify genetic factors that predict COVID-19 susceptibility.
Other companies are growing similar databases, for a range of purposes. And the NIH’s AllofUs program is constructing a genetic database, owned by the federal government, in which data from one million people will be used to study various diseases. These new developments indicate an urgent need for appropriate genetic data governance.
Leaders from the biomedical research community recently proposed a voluntary code of conduct for organizations constructing and sharing genetic databases. We believe that the public has a right to understand the risks of genetic databases and a right to have a say in how those databases will be governed. To ascertain public expectations about genetic data governance, we surveyed over two thousand (n=2,020) individuals who altogether are representative of the general U.S. population. After educating respondents about the key benefits and risks associated with DNA databases—using information from recent mainstream news reports—we asked how willing they would be to provide their DNA data for such a database.
The info is here.
Wednesday, February 26, 2020
Ethical and Legal Aspects of Ambient Intelligence in Hospitals
JAMA. Published online January 24, 2020.
doi:10.1001/jama.2019.21699
Ambient intelligence in hospitals is an emerging form of technology characterized by a constant awareness of activity in designated physical spaces and of the use of that awareness to assist health care workers such as physicians and nurses in delivering quality care. Recently, advances in artificial intelligence (AI) and, in particular, computer vision, the domain of AI focused on machine interpretation of visual data, have propelled broad classes of ambient intelligence applications based on continuous video capture.
One important goal is for computer vision–driven ambient intelligence to serve as a constant and fatigue-free observer at the patient bedside, monitoring for deviations from intended bedside practices, such as reliable hand hygiene and central line insertions. While early studies took place in single patient rooms, more recent work has demonstrated ambient intelligence systems that can detect patient mobilization activities across 7 rooms in an ICU ward and detect hand hygiene activity across 2 wards in 2 hospitals.
As computer vision–driven ambient intelligence accelerates toward a future when its capabilities will most likely be widely adopted in hospitals, it also raises new ethical and legal questions. Although some of these concerns are familiar from other health surveillance technologies, what is distinct about ambient intelligence is that the technology not only captures video data as many surveillance systems do but does so by targeting the physical spaces where sensitive patient care activities take place, and furthermore interprets the video such that the behaviors of patients, health care workers, and visitors may be constantly analyzed. This Viewpoint focuses on 3 specific concerns: (1) privacy and reidentification risk, (2) consent, and (3) liability.
The info is here.
Saturday, February 22, 2020
Hospitals Give Tech Giants Access to Detailed Medical Records
The Wall Street Journal
Originally published 20 Jan 20
Here is an excerpt:
Recent revelations that Alphabet Inc.’s Google is able to tap personally identifiable medical data about patients, reported by The Wall Street Journal, has raised concerns among lawmakers, patients and doctors about privacy.
The Journal also recently reported that Google has access to more records than first disclosed in a deal with the Mayo Clinic.
Mayo officials say the deal allows the Rochester, Minn., hospital system to share personal information, though it has no current plans to do so.
“It was not our intention to mislead the public,” said Cris Ross, Mayo’s chief information officer.
Dr. David Feinberg, head of Google Health, said Google is one of many companies with hospital agreements that allow the sharing of personally identifiable medical data to test products used in treatment and operations.
(cut)
Amazon, Google, IBM and Microsoft are vying for hospitals’ business in the cloud storage market in part by offering algorithms and technology features. To create and launch algorithms, tech companies are striking separate deals for access to medical-record data for research, development and product pilots.
The Health Insurance Portability and Accountability Act, or HIPAA, lets hospitals confidentially send data to business partners related to health insurance, medical devices and other services.
The law requires hospitals to notify patients about health-data uses, but they don’t have to ask for permission.
Data that can identify patients—including name and Social Security number—can’t be shared unless such records are needed for treatment, payment or hospital operations. Deals with tech companies to develop apps and algorithms can fall under these broad umbrellas. Hospitals aren’t required to notify patients of specific deals.
The info is here.
Thursday, January 23, 2020
Colleges want freshmen to use mental health apps. But are they risking students’ privacy?
The New York Times
Originally posted 2 Jan 20
Here are two excerpts:
TAO Connect is just one of dozens of mental health apps permeating college campuses in recent years. In addition to increasing the bandwidth of college counseling centers, the apps offer information and resources on mental health issues and wellness. But as student demand for mental health services grows, and more colleges turn to digital platforms, experts say universities must begin to consider their role as stewards of sensitive student information and the consequences of encouraging or mandating these technologies.
The rise in student wellness applications arrives as mental health problems among college students have dramatically increased. Three in 5 U.S. college students experience overwhelming anxiety, and 2 in 5 students reported debilitating depression, according to a 2018 survey from the American College Health Association.
Even so, only about 15 percent of undergraduates seek help at a university counseling center. These apps have begun to fill students’ needs by providing ongoing access to traditional mental health services without barriers such as counselor availability or stigma.
(cut)
“If someone wants help, they don’t care how they get that help,” said Lynn E. Linde, chief knowledge and learning officer for the American Counseling Association. “They aren’t looking at whether this person is adequately credentialed and are they protecting my rights. They just want help immediately.”
Yet she worried that students may be giving up more information than they realize and about the level of coercion a school can exert by requiring students to accept terms of service they otherwise wouldn’t agree to.
“Millennials understand that with the use of their apps they’re giving up privacy rights. They don’t think to question it,” Linde said.
The info is here.
Tuesday, January 21, 2020
How Could Commercial Terms of Use and Privacy Policies Undermine Informed Consent in the Age of Mobile Health?
doi: 10.1001/amajethics.2018.864.
Abstract
Granular personal data generated by mobile health (mHealth) technologies coupled with the complexity of mHealth systems creates risks to privacy that are difficult to foresee, understand, and communicate, especially for purposes of informed consent. Moreover, commercial terms of use, to which users are almost always required to agree, depart significantly from standards of informed consent. As data use scandals increasingly surface in the news, the field of mHealth must advocate for user-centered privacy and informed consent practices that motivate patients’ and research participants’ trust. We review the challenges and relevance of informed consent and discuss opportunities for creating new standards for user-centered informed consent processes in the age of mHealth.
The info is here.
Thursday, January 2, 2020
The Tricky Ethics of Google's Project Nightingale Effort
nextgov.com
Originally posted 3 Dec 19
The nation’s second-largest health system, Ascension, has agreed to allow the software behemoth Google access to tens of millions of patient records. The partnership, called Project Nightingale, aims to improve how information is used for patient care. Specifically, Ascension and Google are trying to build tools, including artificial intelligence and machine learning, “to make health records more useful, more accessible and more searchable” for doctors.
Ascension did not announce the partnership: The Wall Street Journal first reported it.
Patients and doctors have raised privacy concerns about the plan. Lack of notice to doctors and consent from patients are the primary concerns.
As a public health lawyer, I study the legal and ethical basis for using data to promote public health. Information can be used to identify health threats, understand how diseases spread and decide how to spend resources. But it’s more complicated than that.
The law deals with what can be done with data; this piece focuses on ethics, which asks what should be done.
Beyond Hippocrates
Big-data projects like this one should always be ethically scrutinized. However, data ethics debates are often narrowly focused on consent issues.
In fact, ethical determinations require balancing different, and sometimes competing, ethical principles. Sometimes it might be ethical to collect and use highly sensitive information without getting an individual’s consent.
The info is here.
Tuesday, December 24, 2019
DNA genealogical databases are a gold mine for police, but with few rules and little transparency
- There is no uniform approach for when detectives turn to genealogical databases to solve cases. In some departments, they are to be used only as a last resort. Others are putting them at the center of their investigative process. Some, like Orlando, have no policies at all.
- When DNA services were used, law enforcement generally declined to provide details to the public, including which companies detectives got the match from. The secrecy made it difficult to understand the extent to which privacy was invaded, how many people came under investigation, and what false leads were generated.
- California prosecutors collaborated with a Texas genealogy company at the outset of what became a $2-million campaign to spotlight the heinous crimes they can solve with consumer DNA. Their goal is to encourage more people to make their DNA available to police matching.
Tuesday, November 19, 2019
Medical board declines to act against fertility doctor who inseminated woman with his own sperm
wfaa.com
Originally posted Oct 28, 2019
The Texas Medical Board has declined to act against a fertility doctor who inseminated a woman with his own sperm rather than from a donor the mother selected.
Though Texas lawmakers have now made such an act illegal, the Texas Medical Board found the actions did not “fall below the acceptable standard of care,” and declined further review, according to a response to a complaint obtained by WFAA.
In a follow-up email, a spokesperson told WFAA the board was hamstrung because it cannot review complaints about treatment that occurred seven or more years earlier.
The complaint was filed on behalf of 32-year-old Eve Wiley, of Dallas, who only recently learned her biological father wasn't the sperm donor selected by her mother. Instead, Wiley discovered her biological father was her mother’s fertility doctor in Nacogdoches.
Now 65, Wiley's mother, Margo Williams, had sought help from Dr. Kim McMorries because her husband was infertile.
The info is here.