Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Technology.

Wednesday, June 23, 2021

Experimental Regulations for AI: Sandboxes for Morals and Mores

Ranchordas, Sofia
Morals and Machines (vol.1, 2021)
Available at SSRN.

Abstract

Recent EU legislative and policy initiatives aim to offer flexible, innovation-friendly, and future-proof regulatory frameworks. Key examples are the EU Coordinated Plan on AI and the recently published EU AI Regulation Proposal, which refer to the importance of experimenting with regulatory sandboxes so as to balance innovation in AI against its potential risks. Originally developed in the Fintech sector, regulatory sandboxes create a testbed for a selected number of innovative projects, by waiving otherwise applicable rules, guiding compliance, or customizing enforcement. Despite the burgeoning literature on regulatory sandboxes and the regulation of AI, the legal, methodological, and ethical challenges of regulatory sandboxes have remained understudied. This exploratory article delves into some of the benefits and intricacies of employing experimental legal instruments in the context of the regulation of AI. This article’s contribution is twofold: first, it contextualizes the adoption of regulatory sandboxes in the broader discussion on experimental approaches to regulation; second, it offers a reflection on the steps ahead for the design and implementation of AI regulatory sandboxes.

(cut)

In conclusion, AI regulatory sandboxes are not the answer to more innovation in AI. They are part of the path to a more forward-looking approach to the interaction between law and technology. This new approach will most certainly be welcomed with reluctance in years to come as it disrupts existing dogmas pertaining to the way in which we conceive the principle of legal certainty and the reactive—rather than anticipatory—nature of law. However, traditional law and regulation were designed with human agents and enigmas in mind. Many of the problems generated by AI (discrimination, power asymmetries, and manipulation) are still human but their scale and potential for harms (and benefits) have long ceased to be. It is thus time to rethink our fundamental approach to regulation and refocus on the new regulatory subject before us.

Friday, May 14, 2021

The Internet as Cognitive Enhancement

Voinea, C., Vică, C., Mihailov, E. et al. 
Sci Eng Ethics 26, 2345–2362 (2020). 
https://doi.org/10.1007/s11948-020-00210-8

Abstract

The Internet has been identified in human enhancement scholarship as a powerful cognitive enhancement technology. It offers instant access to almost any type of information, along with the ability to share that information with others. The aim of this paper is to critically assess the enhancement potential of the Internet. We argue that unconditional access to information does not lead to cognitive enhancement. The Internet is not a simple, uniform technology, either in its composition or in its use. We will look into why the Internet as an informational resource currently fails to enhance cognition. We analyze some of the phenomena that emerge from vast, continual fluxes of information (information overload, misinformation, and persuasive design) and show how they could negatively impact users’ cognition. Methods for mitigating these negative impacts are then advanced: individual empowerment, better collaborative systems for sorting and categorizing information, and the use of artificial intelligence assistants that could guide users through the informational space of today’s Internet.

Conclusions

Although the Internet is one of the main drivers of change and evolution, its capacity to radically transform human cognition is exaggerated. No doubt this technology has improved numerous areas of our lives by facilitating access to and exchange of knowledge. However, its cognitive enhancement potential is not as clear as originally assumed. Too much information, misinformation, and the exploitation of users’ attention through persuasive design, could result in a serious decrease of users’ cognitive performance. The Internet is also an environment where users’ cognitive capacities are put under stress and their biases exploited.

Thursday, May 13, 2021

Technology and the Value of Trust: Can we trust technology? Should we?

John Danaher
Philosophical Disquisitions
Originally published 30 Mar 21

Can we trust technology? Should we try to make technology, particularly AI, more trustworthy? These are questions that have perplexed philosophers and policy-makers in recent years. The EU’s High Level Expert Group on AI has, for example, recommended that the primary goal of AI policy and law in the EU should be to make the technology more trustworthy. But some philosophers have critiqued this goal as being borderline incoherent. You cannot trust AI, they say, because AI is just a thing. Trust can exist between people and other people, not between people and things.

This is an old debate. Trust is a central value in human society. The fact that I can trust my partner not to betray me is one of the things that makes our relationship workable and meaningful. The fact that I can trust my neighbours not to kill me is one of the things that allows me to sleep at night. Indeed, so implicit is this trust that I rarely think about it. It is one of the background conditions that makes other things in my life possible. Still, it is true that when I think about trust, and when I think about what it is that makes trust valuable, I usually think about trust in my relationships with other people, not my relationships with things.

But would it be so terrible to talk about trust in technology? Should we use some other term instead such as ‘reliable’ or ‘confidence-inspiring’? Or should we, as some blockchain enthusiasts have argued, use technology to create a ‘post-trust’ system of social governance?

I want to offer some quick thoughts on these questions in this article. I will do so in four stages. First, I will briefly review some of the philosophical debates about trust in people and trust in things. Second, I will consider the value of trust, distinguishing between its intrinsic and extrinsic components. Third, I will suggest that it is meaningful to talk about trust in technology, but that the kind of trust we have in technology has a different value to the kind of trust we have in other people. Finally, I will argue that most talk about building ‘trustworthy’ technology is misleading: the goal of most of these policies is to obviate or override the need for trust.

Monday, May 10, 2021

Do Brain Implants Change Your Identity?

Christine Kenneally
The New Yorker
Originally posted 19 Apr 21

Here are two excerpts:

Today, at least two hundred thousand people worldwide, suffering from a wide range of conditions, live with a neural implant of some kind. In recent years, Mark Zuckerberg, Elon Musk, and Bryan Johnson, the founder of the payment-processing company Braintree, all announced neurotechnology projects for restoring or even enhancing human abilities. As we enter this new era of extra-human intelligence, it’s becoming apparent that many people develop an intense relationship with their device, often with profound effects on their sense of identity. These effects, though still little studied, are emerging as crucial to a treatment’s success.

The human brain is a small electrical device of super-galactic complexity. It contains an estimated hundred billion neurons, with many more links between them than there are stars in the Milky Way. Each neuron works by passing an electrical charge along its length, causing neurotransmitters to leap to the next neuron, which ignites in turn, usually in concert with many thousands of others. Somehow, human intelligence emerges from this constant, thrilling choreography. How it happens remains an almost total mystery, but it has become clear that neural technologies will be able to synch with the brain only if they learn the steps of this dance.

(cut)

For the great majority of patients, deep-brain stimulation was beneficial and life-changing, but there were occasional reports of strange behavioral reactions, such as hypomania and hypersexuality. Then, in 2006, a French team published a study about the unexpected consequences of otherwise successful implantations. Two years after a brain implant, sixty-five per cent of patients had a breakdown in their marriages or relationships, and sixty-four per cent wanted to leave their careers. Their intellect and their levels of anxiety and depression were the same as before, or, in the case of anxiety, had even improved, but they seemed to experience a fundamental estrangement from themselves. One felt like an electronic doll. Another said he felt like RoboCop, under remote control.

Gilbert describes himself as “an applied eliminativist.” He doesn’t believe in a soul, or a mind, at least as we normally think of them, and he strongly questions whether there is a thing you could call a self. He suspected that people whose marriages broke down had built their identities and their relationships around their pathologies. When those were removed, the relationships no longer worked. Gilbert began to interview patients. He used standardized questionnaires, a procedure that is methodologically vital for making dependable comparisons, but soon he came to feel that something about this unprecedented human experience was lost when individual stories were left out. The effects he was studying were inextricable from his subjects’ identities, even though those identities changed.

Many people reported that the person they were after treatment was entirely different from the one they’d been when they had only dreamed of relief from their symptoms. Some experienced an uncharacteristic buoyancy and confidence. One woman felt fifteen years younger and tried to lift a pool table, rupturing a disk in her back. One man noticed that his newfound confidence was making life hard for his wife; he was too “full-on.” Another woman became impulsive, walking ten kilometres to a psychologist’s appointment nine days after her surgery. She was unrecognizable to her family. They told her that they grieved for the old her.

Saturday, May 1, 2021

Could you hate a robot? And does it matter if you could?

Ryland, H. 
AI & Soc (2021).
https://doi.org/10.1007/s00146-021-01173-5

Abstract

This article defends two claims. First, humans could be in relationships characterised by hate with some robots. Second, it matters that humans could hate robots, as this hate could wrong the robots (by leaving them at risk of mistreatment, exploitation, etc.). In defending this second claim, I will thus be accepting that morally considerable robots either currently exist, or will exist in the near future, and so it can matter (morally speaking) how we treat these robots. The arguments presented in this article make an important original contribution to the robo-philosophy literature, and particularly the literature on human–robot relationships (which typically only consider positive relationship types, e.g., love, friendship, etc.). Additionally, as explained at the end of the article, my discussions of robot hate could also have notable consequences for the emerging robot rights movement. Specifically, I argue that understanding human–robot relationships characterised by hate could actually help theorists argue for the rights of robots.

Conclusion

This article has argued for two claims. First, humans could be in relationships characterised by hate with morally considerable robots. Second, it matters that humans could hate these robots. This is at least partly because such hateful relations could have long-term negative effects for the robot (e.g., by encouraging bad will towards the robots). The article ended by explaining how discussions of human–robot relationships characterised by hate are connected to discussions of robot rights. I argued that the conditions for a robot being an object of hate and for having rights are the same—being sufficiently person-like. I then suggested how my discussions of human–robot relationships characterised by hate could be used to support, rather than undermine, the robot rights movement.

Friday, April 30, 2021

Experimental Philosophy of Technology

Steven Kraaijeveld
Philosophy & Technology

Abstract 

Experimental philosophy is a relatively recent discipline that employs experimental methods to investigate the intuitions, concepts, and assumptions behind traditional philosophical arguments, problems, and theories. While experimental philosophy initially served to interrogate the role that intuitions play in philosophy, it has since branched out to bring empirical methods to bear on problems within a variety of traditional areas of philosophy—including metaphysics, philosophy of language, philosophy of mind, and epistemology. To date, no connection has been made between developments in experimental philosophy and philosophy of technology. In this paper, I develop and defend a research program for an experimental philosophy of technology.

Conclusion 

The field of experimental philosophy has, through its engagement with experimental methods, become an important means of obtaining knowledge about the intuitions, concepts, and assumptions that lie behind philosophical arguments, problems, and theories across a wide variety of philosophical disciplines. In this paper, I have extended this burgeoning research program to philosophy of technology, providing both a general outline of how an experimental philosophy of technology might look and a more specific methodology and set of programs that engages with research already being conducted in the field. I have responded to potential objections to an experimental philosophy of technology and I have argued for a number of unique strengths of the approach. Aside from engaging with work that already involves intuitions in techno-philosophical research, a booming experimental philosophy of technology research program can offer a unifying methodology for a diverse set of subfields, a way of generating knowledge across disciplines without necessarily requiring specialized knowledge; and, at the very least, it can make those working in philosophy of technology—and those in society who engage with technology, which is all of us—more mindful of the intuitions about technology that we may, rightly or wrongly, hold.

Wednesday, March 10, 2021

Thought-detection: AI has infiltrated our last bastion of privacy

Gary Grossman
VentureBeat
Originally posted 13 Feb 21

Our thoughts are private – or at least they were. New breakthroughs in neuroscience and artificial intelligence are changing that assumption, while at the same time inviting new questions around ethics, privacy, and the horizons of brain/computer interaction.

Research published last week from Queen Mary University of London describes an application of a deep neural network that can determine a person’s emotional state by analyzing wireless signals that are used like radar. In this research, participants in the study watched a video while radio signals were sent towards them and measured as they bounced back. Analysis of body movements revealed “hidden” information about an individual’s heart and breathing rates. From these findings, the algorithm can determine one of four basic emotion types: anger, sadness, joy, and pleasure. The researchers proposed this work could help with the management of health and wellbeing and be used to perform tasks like detecting depressive states.
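To make the pipeline described above more concrete, here is a minimal, hypothetical Python sketch (using numpy and scikit-learn): a reflected-signal trace is reduced to rough breathing- and heart-rate features, and a small neural network maps those features to one of the four emotion labels. The feature extraction, the toy training data, and the classifier are illustrative assumptions, not the Queen Mary team's actual model.

```python
# A minimal, hypothetical sketch (not the study's actual deep network):
# reflected-signal traces are reduced to breathing- and heart-rate features,
# and a small neural network maps those features to one of four emotions.
import numpy as np
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["anger", "sadness", "joy", "pleasure"]

def extract_features(reflected_signal, sample_rate_hz):
    """Estimate breathing and heart rates (per minute) from a 1-D signal trace."""
    spectrum = np.abs(np.fft.rfft(reflected_signal - reflected_signal.mean()))
    freqs = np.fft.rfftfreq(len(reflected_signal), d=1.0 / sample_rate_hz)
    breathing_band = (freqs > 0.1) & (freqs < 0.7)   # roughly 6-42 breaths/min
    heart_band = (freqs > 0.8) & (freqs < 3.0)       # roughly 48-180 beats/min
    breathing_hz = freqs[breathing_band][np.argmax(spectrum[breathing_band])]
    heart_hz = freqs[heart_band][np.argmax(spectrum[heart_band])]
    return np.array([breathing_hz * 60.0, heart_hz * 60.0])

# Toy training data: [breathing rate, heart rate] pairs with made-up emotion labels.
rng = np.random.default_rng(0)
X_train = rng.normal(loc=[[15, 70], [12, 60], [18, 85], [14, 75]] * 25, scale=2.0)
y_train = np.tile([0, 1, 2, 3], 25)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

# Classify a new (simulated) reflected-signal trace.
signal = rng.normal(size=3000)
features = extract_features(signal, sample_rate_hz=50.0)
print(EMOTIONS[int(clf.predict(features.reshape(1, -1))[0])])
```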

Ahsan Noor Khan, a PhD student and first author of the study, said: “We’re now looking to investigate how we could use low-cost existing systems, such as Wi-Fi routers, to detect emotions of a large number of people gathered, for instance in an office or work environment.” Among other things, this could be useful for HR departments to assess how new policies introduced in a meeting are being received, regardless of what the recipients might say. Outside of an office, police could use this technology to look for emotional changes in a crowd that might lead to violence.

The research team plans to examine public acceptance and ethical concerns around the use of this technology. Such concerns would not be surprising and conjure up a very Orwellian idea of the ‘thought police’ from 1984. In this novel, the thought police watchers are expert at reading people’s faces to ferret out beliefs unsanctioned by the state, though they never mastered learning exactly what a person was thinking.


Tuesday, January 12, 2021

Is that artificial intelligence ethical? Sony to review all products

Nikkei Asia
Nikkei staff writers
Originally posted 22 Dec 2020

Here is an excerpt:

Sony will start screening all of its AI-infused products for ethical risks as early as spring, Nikkei has learned. If a product is deemed ethically deficient, the company will improve it or halt development.

Sony uses AI in its latest generation of the Aibo robotic dog, for instance, which can recognize up to 100 faces and continues to learn through the cloud.

Sony will incorporate AI ethics into its quality control, using internal guidelines.

The company will review artificially intelligent products from development to post-launch on such criteria as privacy protection. Ethically deficient offerings will be modified or dropped.

An AI Ethics Committee, with its head appointed by the CEO, will have the power to halt development on products with issues.

Even products well into development could still be dropped. Ones already sold could be recalled if problems are found. The company plans to gradually broaden the AI ethics rules to offerings in finance and entertainment as well.

As AI finds its way into more devices, the responsibilities of developers are increasing, and companies are strengthening ethical guidelines.

Thursday, December 24, 2020

Google Employees Call Black Scientist's Ouster 'Unprecedented Research Censorship'

Bobby Allyn
www.npr.org
Originally published 3 Dec 20

Hundreds of Google employees have published an open letter following the firing of an accomplished scientist known for her research into the ethics of artificial intelligence and her work showing racial bias in facial recognition technology.

That scientist, Timnit Gebru, helped lead Google's Ethical Artificial Intelligence Team until Tuesday.

Gebru, who is Black, says she was forced out of the company after a dispute over a research paper and an email she subsequently sent to peers expressing frustration over how the tech giant treats employees of color and women.

"Instead of being embraced by Google as an exceptionally talented and prolific contributor, Dr. Gebru has faced defensiveness, racism, gaslighting, research censorship, and now a retaliatory firing," the open letter said. By Thursday evening, more than 400 Google employees and hundreds of outsiders — many of them academics — had signed it.

The research paper in question was co-authored by Gebru along with four others at Google and two other researchers. It examined the environmental and ethical implications of an AI tool used by Google and other technology companies, according to NPR's review of the draft paper.

The 12-page draft explored the possible pitfalls of relying on the tool, which scans massive amounts of information on the Internet and produces text as if written by a human. The paper argued it could end up mimicking hate speech and other types of derogatory and biased language found online. The paper also cautioned against the energy cost of using such large-scale AI models.

According to Gebru, she was planning to present the paper at a research conference next year, but then her bosses at Google stepped in and demanded she retract the paper or remove all the Google employees as authors.

Friday, October 23, 2020

Ethical Dimensions of Using Artificial Intelligence in Health Care

Michael J. Rigby
AMA Journal of Ethics
February 2019

An artificially intelligent computer program can now diagnose skin cancer more accurately than a board-certified dermatologist. Better yet, the program can do it faster and more efficiently, requiring a training data set rather than a decade of expensive and labor-intensive medical education. While it might appear that it is only a matter of time before physicians are rendered obsolete by this type of technology, a closer look at the role this technology can play in the delivery of health care is warranted to appreciate its current strengths, limitations, and ethical complexities.

Artificial intelligence (AI), which includes the fields of machine learning, natural language processing, and robotics, can be applied to almost any field in medicine, and its potential contributions to biomedical research, medical education, and delivery of health care seem limitless. With its robust ability to integrate and learn from large sets of clinical data, AI can serve roles in diagnosis, clinical decision making, and personalized medicine. For example, AI-based diagnostic algorithms applied to mammograms are assisting in the detection of breast cancer, serving as a “second opinion” for radiologists. In addition, advanced virtual human avatars are capable of engaging in meaningful conversations, which has implications for the diagnosis and treatment of psychiatric disease. AI applications also extend into the physical realm with robotic prostheses, physical task support systems, and mobile manipulators assisting in the delivery of telemedicine.

Nonetheless, this powerful technology creates a novel set of ethical challenges that must be identified and mitigated since AI technology has tremendous capability to threaten patient preference, safety, and privacy. However, current policy and ethical guidelines for AI technology are lagging behind the progress AI has made in the health care field. While some efforts to engage in these ethical conversations have emerged, the medical community remains ill informed of the ethical complexities that budding AI technology can introduce. Accordingly, a rich discussion awaits that would greatly benefit from physician input, as physicians will likely be interfacing with AI in their daily practice in the near future.

Friday, October 9, 2020

AI ethics groups are repeating one of society’s classic mistakes

Abhishek Gupta and Victoria Heath
MIT Technology Review
Originally published 14 September 20

Here is an excerpt:

Unfortunately, as it stands today, the entire field of AI ethics is at grave risk of limiting itself to languages, ideas, theories, and challenges from a handful of regions—primarily North America, Western Europe, and East Asia.

This lack of regional diversity reflects the current concentration of AI research (pdf): 86% of papers published at AI conferences in 2018 were attributed to authors in East Asia, North America, or Europe. And fewer than 10% of references listed in AI papers published in these regions are to papers from another region. Patents are also highly concentrated: 51% of AI patents published in 2018 were attributed to North America.

Those of us working in AI ethics will do more harm than good if we allow the field’s lack of geographic diversity to define our own efforts. If we’re not careful, we could wind up codifying AI’s historic biases into guidelines that warp the technology for generations to come. We must start to prioritize voices from low- and middle-income countries (especially those in the “Global South”) and those from historically marginalized communities.

Advances in technology have often benefited the West while exacerbating economic inequality, political oppression, and environmental destruction elsewhere. Including non-Western countries in AI ethics is the best way to avoid repeating this pattern.

The good news is there are many experts and leaders from underrepresented regions to include in such advisory groups. However, many international organizations seem not to be trying very hard to solicit participation from these people. The newly formed Global AI Ethics Consortium, for example, has no founding members representing academic institutions or research centers from the Middle East, Africa, or Latin America. This omission is a stark example of colonial patterns (pdf) repeating themselves.

Saturday, September 19, 2020

Don’t ask if artificial intelligence is good or fair, ask how it shifts power

Pratyusha Kalluri
nature.com
Originally posted 7 July 20

Here is an excerpt:

Researchers in AI overwhelmingly focus on providing highly accurate information to decision makers. Remarkably little research focuses on serving data subjects. What’s needed are ways for these people to investigate AI, to contest it, to influence it or to even dismantle it. For example, the advocacy group Our Data Bodies is putting forward ways to protect personal data when interacting with US fair-housing and child-protection services. Such work gets little attention. Meanwhile, mainstream research is creating systems that are extraordinarily expensive to train, further empowering already powerful institutions, from Amazon, Google and Facebook to domestic surveillance and military programmes.

Many researchers have trouble seeing their intellectual work with AI as furthering inequity. Researchers such as me spend our days working on what are, to us, mathematically beautiful and useful systems, and hearing of AI success stories, such as winning Go championships or showing promise in detecting cancer. It is our responsibility to recognize our skewed perspective and listen to those impacted by AI.

Through the lens of power, it’s possible to see why accurate, generalizable and efficient AI systems are not good for everyone. In the hands of exploitative companies or oppressive law enforcement, a more accurate facial recognition system is harmful. Organizations have responded with pledges to design ‘fair’ and ‘transparent’ systems, but fair and transparent according to whom? These systems sometimes mitigate harm, but are controlled by powerful institutions with their own agendas. At best, they are unreliable; at worst, they masquerade as ‘ethics-washing’ technologies that still perpetuate inequity.

Already, some researchers are exposing hidden limitations and failures of systems. They braid their research findings with advocacy for AI regulation. Their work includes critiquing inadequate technological ‘fixes’. Other researchers are explaining to the public how natural resources, data and human labour are extracted to create AI.

The info is here.

Monday, September 7, 2020

From sex robots to love robots: is mutual love with a robot possible?

S.R. Nyholm and L.E. Frank
Philosophy & Ethics

Some critics of sex-robots worry that their use might spread objectifying attitudes about sex, and common sense places a higher value on sex within love-relationships than on casual sex. If there could be mutual love between humans and sex-robots, this could help to ease the worries about objectifying attitudes. And mutual love between humans and sex-robots, if possible, could also help to make this sex more valuable. But is mutual love between humans and robots possible, or even conceivable? We discuss three clusters of ideas and associations commonly discussed within the philosophy of love, and relate these to the topic of whether mutual love could be achieved between humans and sex-robots: (i) the idea of love as a “good match”; (ii) the idea of valuing each other in our distinctive particularity; and (iii) the idea of a steadfast commitment. We consider relations among these ideas and the sort of agency and free will that we attribute to human romantic partners. Our conclusion is that mutual love between humans and advanced sex-robots is not an altogether impossible proposition. However, it is unlikely that we will be able to create robots sophisticated enough to be able to participate in love-relationships anytime soon.

From the Conclusion:

As with the development of any new technology that has the potential to be socially disruptive, we urge caution and careful ethical examination prior to and continuing through the research-and-development process. The consequences and techno-moral change that will potentially accompany the advancement of robots that can love and be loved are very difficult to predict. But a “no” answer to the question of whether we should invest in the creation of love robots should not be based on mere conservatism with respect to love relationships, unjustified preference for the natural over the artificial, or an unsupported fear of the potential risks. Any such answer, in our view, should rather be based on an “opportunity cost” argument: that is, if it can be shown that the time, energy, and resources could be better spent on other, more easily attainable endeavors, then those other projects should perhaps be favored over something as relatively far-fetched as sex robots advanced enough to participate in relationships of mutual love along the lines described in the previous sections.

A pdf can be downloaded here.

Friday, August 7, 2020

Technology Can Help Us, but People Must Be the Moral Decision Makers

Andrew Briggs
medium.com
Originally posted 8 June 20

Here is an excerpt:

Many individuals in technology fields see tools such as machine learning and AI as precisely that — tools — which are intended to be used to support human endeavors, and they tend to argue how such tools can be used to optimize technical decisions. Those people concerned with the social impacts of these technologies tend to approach the debate from a moral stance and to ask how these technologies should be used to promote human flourishing.

This is not an unresolvable conflict, nor is it purely academic. As the world grapples with the coronavirus pandemic, society is increasingly faced with decisions about how technology should be used: Should sick people’s contacts be traced using cell phone data? Should AIs determine who can or cannot work or travel based on their most recent COVID-19 test results? These questions have both technical and moral dimensions. Thankfully, humans have a unique capacity for moral choices in a way that machines simply do not.

One of our findings is that for humanity to thrive in the new digital age, we cannot disconnect our technical decisions and innovations from moral reasoning. New technologies require innovations in society. To think that the advance of technology can be stopped, or that established moral modalities need not be applied afresh to new circumstances, is a fraught path. There will often be tradeoffs between social goals, such as maintaining privacy, and technological goals, such as identifying disease vectors.

The info is here.

Tuesday, April 21, 2020

When Google and Apple get privacy right, is there still something wrong?

Tamar Sharon
Medium.com
Originally posted 15 April 20

Here is an excerpt:

As the understanding that we are in this for the long run settles in, the world is increasingly turning its attention to technological solutions to address the devastating COVID-19 virus. Contact-tracing apps in particular seem to hold much promise. Using Bluetooth technology to communicate between users’ smartphones, these apps could map contacts between infected individuals and alert people who have been in proximity to an infected person. Some countries, including China, Singapore, South Korea and Israel, have deployed these early on. Health authorities in the UK, France, Germany, the Netherlands, Iceland, the US and other countries, are currently considering implementing such apps as a means of easing lock-down measures.

There are some bottlenecks. Do they work? The effectiveness of these applications has not been evaluated — in isolation or as part of an integrated strategy. How many people would need to use them? Not everyone has a smartphone. Even in rich countries, the most vulnerable group, aged over 80, is least likely to have one. Then there’s the question about fundamental rights and liberties, first and foremost privacy and data protection. Will contact-tracing become part of a permanent surveillance structure in the prolonged “state of exception” we are sleep-walking into?

Prompted by public discussions about this last concern, a number of European governments have indicated the need to develop such apps in a way that would be privacy preserving, while independent efforts involving technologists and scientists to deliver privacy-centric solutions have been cropping up. The Pan-European Privacy-Preserving Proximity Tracing initiative (PEPP-PT), and in particular the Decentralised Privacy-Preserving Proximity Tracing (DP-3T) protocol, which provides an outline for a decentralised system, are notable forerunners. Somewhat late in the game, the European Commission last week issued a Recommendation for a pan-European approach to the adoption of contact-tracing apps that would respect fundamental rights such as privacy and data protection.
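To see how a decentralised protocol such as DP-3T can keep the contact graph off any central server, here is a highly simplified, hypothetical Python sketch. The key derivation, epoch length, and matching step are illustrative assumptions rather than the real specification; the point is only that phones broadcast rotating pseudonymous IDs, that a diagnosed user publishes nothing but a daily key, and that matching happens locally on each device.

```python
# A highly simplified, hypothetical sketch of a decentralised design in the
# spirit of DP-3T (the key derivation, epoch length, and matching below are
# illustrative assumptions, not the actual specification).
import hmac
import hashlib
import os

EPOCHS_PER_DAY = 96  # e.g. one ephemeral ID per 15-minute window

def daily_key():
    """Fresh random secret generated on the phone each day; shared only if diagnosed."""
    return os.urandom(32)

def ephemeral_ids(day_key):
    """Derive the day's short-lived broadcast IDs from the daily key, one per epoch."""
    return [
        hmac.new(day_key, f"epoch-{i}".encode(), hashlib.sha256).digest()[:16]
        for i in range(EPOCHS_PER_DAY)
    ]

# Alice's phone broadcasts her ephemeral IDs over Bluetooth; Bob's phone
# records the ones it hears without learning who they belong to.
alice_key = daily_key()
bob_observed = set(ephemeral_ids(alice_key)[40:44])  # Bob was near Alice for ~1 hour

# If Alice later tests positive, only her daily key is uploaded to the server.
published_keys = [alice_key]

# Bob's phone downloads the published keys and matches locally, so the
# contact graph never leaves his device.
exposed = any(
    eid in bob_observed
    for key in published_keys
    for eid in ephemeral_ids(key)
)
print("Exposure notification:", exposed)
```

In a centralised design, by contrast, devices would send the IDs they observed to a server, which could then reconstruct who met whom; the decentralised approach is what makes the privacy-preserving claim plausible.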

The info is here.

Tuesday, April 14, 2020

New Data Rules Could Empower Patients but Undermine Their Privacy

Natasha Singer
The New York Times
Originally posted 9 March 20

Here is an excerpt:

The Department of Health and Human Services said the new system was intended to make it as easy for people to manage their health care on smartphones as it is for them to use apps to manage their finances.

Giving people access to their medical records via mobile apps is a major milestone for patient rights, even as it may heighten risks to patient privacy.

Prominent organizations like the American Medical Association have warned that, without accompanying federal safeguards, the new rules could expose people who share their diagnoses and other intimate medical details with consumer apps to serious data abuses.

Although Americans have had the legal right to obtain a copy of their personal health information for two decades, many people face obstacles in getting that data from providers.

Some physicians still require patients to pick up computer disks — or even photocopies — of their records in person. Some medical centers use online portals that offer access to basic health data, like immunizations, but often do not include information like doctors’ consultation notes that might help patients better understand their conditions and track their progress.

The new rules are intended to shift that power imbalance toward the patient.

The info is here.

Wednesday, February 26, 2020

Ethical and Legal Aspects of Ambient Intelligence in Hospitals

Gerke S, Yeung S, Cohen IG.
JAMA. Published online January 24, 2020.
doi:10.1001/jama.2019.21699

Ambient intelligence in hospitals is an emerging form of technology characterized by a constant awareness of activity in designated physical spaces and of the use of that awareness to assist health care workers such as physicians and nurses in delivering quality care. Recently, advances in artificial intelligence (AI) and, in particular, computer vision, the domain of AI focused on machine interpretation of visual data, have propelled broad classes of ambient intelligence applications based on continuous video capture.

One important goal is for computer vision-driven ambient intelligence to serve as a constant and fatigue-free observer at the patient bedside, monitoring for deviations from intended bedside practices, such as reliable hand hygiene and central line insertions. While early studies took place in single patient rooms, more recent work has demonstrated ambient intelligence systems that can detect patient mobilization activities across 7 rooms in an ICU ward and detect hand hygiene activity across 2 wards in 2 hospitals.

As computer vision–driven ambient intelligence accelerates toward a future when its capabilities will most likely be widely adopted in hospitals, it also raises new ethical and legal questions. Although some of these concerns are familiar from other health surveillance technologies, what is distinct about ambient intelligence is that the technology not only captures video data as many surveillance systems do but does so by targeting the physical spaces where sensitive patient care activities take place, and furthermore interprets the video such that the behaviors of patients, health care workers, and visitors may be constantly analyzed. This Viewpoint focuses on 3 specific concerns: (1) privacy and reidentification risk, (2) consent, and (3) liability.

The info is here.

Saturday, February 22, 2020

Hospitals Give Tech Giants Access to Detailed Medical Records

Melanie Evans
The Wall Street Journal
Originally published 20 Jan 20

Here is an excerpt:

Recent revelations that Alphabet Inc.’s Google is able to tap personally identifiable medical data about patients, reported by The Wall Street Journal, have raised concerns among lawmakers, patients and doctors about privacy.

The Journal also recently reported that Google has access to more records than first disclosed in a deal with the Mayo Clinic.

Mayo officials say the deal allows the Rochester, Minn., hospital system to share personal information, though it has no current plans to do so.

“It was not our intention to mislead the public,” said Cris Ross, Mayo’s chief information officer.

Dr. David Feinberg, head of Google Health, said Google is one of many companies with hospital agreements that allow the sharing of personally identifiable medical data to test products used in treatment and operations.

(cut)

Amazon, Google, IBM and Microsoft are vying for hospitals’ business in the cloud storage market in part by offering algorithms and technology features. To create and launch algorithms, tech companies are striking separate deals for access to medical-record data for research, development and product pilots.

The Health Insurance Portability and Accountability Act, or HIPAA, lets hospitals confidentially send data to business partners related to health insurance, medical devices and other services.

The law requires hospitals to notify patients about health-data uses, but they don’t have to ask for permission.

Data that can identify patients—including name and Social Security number—can’t be shared unless such records are needed for treatment, payment or hospital operations. Deals with tech companies to develop apps and algorithms can fall under these broad umbrellas. Hospitals aren’t required to notify patients of specific deals.

The info is here.

Thursday, February 20, 2020

Sharing Patient Data Without Exploiting Patients

McCoy MS, Joffe S, Emanuel EJ.
JAMA. Published online January 16, 2020.
doi:10.1001/jama.2019.22354

Here is an excerpt:

The Risks of Data Sharing

When health systems share patient data, the primary risk to patients is the exposure of their personal health information, which can result in a range of harms including embarrassment, stigma, and discrimination. Such exposure is most obvious when health systems fail to remove identifying information before sharing data, as is alleged in the lawsuit against Google and the University of Chicago. But even when shared data are fully deidentified in accordance with the requirements of the Health Insurance Portability and Accountability Act, reidentification is possible, especially when patient data are linked with other data sets. Indeed, even new data privacy laws such as Europe's General Data Protection Regulation and California's Consumer Privacy Act do not eliminate reidentification risk.
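A toy example makes the reidentification risk concrete: in the hypothetical Python sketch below, a "deidentified" health record is linked back to a named person by joining on quasi-identifiers (ZIP code, birth year, and sex) that also appear in another data set. All names and records are invented for illustration.

```python
# A toy, hypothetical illustration of reidentification by linkage: all records
# below are invented, and the quasi-identifiers are just ZIP code, birth year,
# and sex.
deidentified_health_records = [
    {"zip": "02139", "birth_year": 1984, "sex": "F", "diagnosis": "type 2 diabetes"},
    {"zip": "60615", "birth_year": 1971, "sex": "M", "diagnosis": "depression"},
]

public_records = [  # e.g. a voter roll or other public listing with names
    {"name": "Jane Doe", "zip": "02139", "birth_year": 1984, "sex": "F"},
    {"name": "John Roe", "zip": "60615", "birth_year": 1971, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(health_rows, public_rows):
    """Link 'deidentified' health records to names when quasi-identifiers match uniquely."""
    matches = []
    for h in health_rows:
        candidates = [
            p for p in public_rows
            if all(p[k] == h[k] for k in QUASI_IDENTIFIERS)
        ]
        if len(candidates) == 1:  # a unique match reidentifies the patient
            matches.append((candidates[0]["name"], h["diagnosis"]))
    return matches

print(reidentify(deidentified_health_records, public_records))
```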

Companies that acquire patient data also accept risk by investing in research and development that may not result in marketable products. This risk is less ethically concerning, however, than that borne by patients. While companies usually can abandon unpromising ventures, patients’ lack of control over data-sharing arrangements makes them vulnerable to exploitation. Patients lack control, first, because they may have no option other than to seek care in a health system that plans to share their data. Second, even if patients are able to authorize sharing of their data, they are rarely given the information and opportunity to ask questions needed to give meaningful informed consent to future uses of their data.

Thus, for the foreseeable future, data sharing will entail ethically concerning risks to patients whose data are shared. But whether these exchanges are exploitative depends on how much benefit patients receive from data sharing.

The info is here.

Wednesday, February 5, 2020

A Reality Check On Artificial Intelligence: Are Health Care Claims Overblown?

Liz Szabo
Kaiser Health News
Originally published 30 Dec 19

Here is an excerpt:

“Almost none of the [AI] stuff marketed to patients really works,” said Dr. Ezekiel Emanuel, professor of medical ethics and health policy in the Perelman School of Medicine at the University of Pennsylvania.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices ― such as ones that help people count their daily steps ― need less scrutiny than ones that diagnose or treat disease.

Some software developers don’t bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. “It’s not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal,” said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy’s report. “That’s not how the U.S. economy works.”

But Oren Etzioni, chief executive officer at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

The info is here.