Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Privacy.

Wednesday, June 23, 2021

Experimental Regulations for AI: Sandboxes for Morals and Mores

Ranchordas, Sofia
Morals and Machines (vol.1, 2021)
Available at SSRN: 

Abstract

Recent EU legislative and policy initiatives aim to offer flexible, innovation-friendly, and future-proof regulatory frameworks. Key examples are the EU Coordinated Plan on AI and the recently published EU AI Regulation Proposal which refer to the importance of experimenting with regulatory sandboxes so as to balance innovation in AI against its potential risks. Originally developed in the Fintech sector, regulatory sandboxes create a testbed for a selected number of innovative projects, by waiving otherwise applicable rules, guiding compliance, or customizing enforcement. Despite the burgeoning literature on regulatory sandboxes and the regulation of AI, the legal, methodological, and ethical challenges of regulatory sandboxes have remained understudied. This exploratory article delves into some of the benefits and intricacies of employing experimental legal instruments in the context of the regulation of AI. This article’s contribution is twofold: first, it contextualizes the adoption of regulatory sandboxes in the broader discussion on experimental approaches to regulation; second, it offers a reflection on the steps ahead for the design and implementation of AI regulatory sandboxes.

(cut)

In conclusion, AI regulatory sandboxes are not the answer to more innovation in AI. They are part of the path to a more forward-looking approach to the interaction between law and technology. This new approach will most certainly be welcomed with reluctance in years to come as it disrupts existing dogmas pertaining to the way in which we conceive the principle of legal certainty and the reactive—rather than anticipatory—nature of law. However, traditional law and regulation were designed with human agents and enigmas in mind. Many of the problems generated by AI (discrimination, power asymmetries, and manipulation) are still human but their scale and potential for harms (and benefits) have long ceased to be. It is thus time to rethink our fundamental approach to regulation and refocus on the new regulatory subject before us.

Wednesday, March 10, 2021

Thought-detection: AI has infiltrated our last bastion of privacy

Gary Grossman
VentureBeat
Originally posted 13 Feb 21

Our thoughts are private – or at least they were. New breakthroughs in neuroscience and artificial intelligence are changing that assumption, while at the same time inviting new questions around ethics, privacy, and the horizons of brain/computer interaction.

Research published last week from Queen Mary University of London describes an application of a deep neural network that can determine a person’s emotional state by analyzing wireless signals that are used like radar. In this research, participants in the study watched a video while radio signals were sent towards them and measured when they bounced back. Analysis of body movements revealed “hidden” information about an individual’s heart and breathing rates. From these findings, the algorithm can determine one of four basic emotion types: anger, sadness, joy, and pleasure. The researchers proposed this work could help with the management of health and wellbeing and be used to perform tasks like detecting depressive states.
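
To make that pipeline a little more concrete, here is a minimal, hypothetical Python sketch: physiological features of the kind the researchers recover from reflected radio signals (heart rate, heart-rate variability, breathing rate, breathing depth) are fed to a small neural-network classifier that outputs one of the four emotion labels. The feature set, the synthetic data, and the model are illustrative assumptions, not the Queen Mary team's actual method.

```python
# Hypothetical sketch of the described pipeline: vital-sign features recovered
# from reflected radio signals (faked here) -> small neural net -> emotion label.
import numpy as np
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["anger", "sadness", "joy", "pleasure"]
rng = np.random.default_rng(0)

def fake_radar_features(n_samples: int) -> np.ndarray:
    """Stand-in for features extracted from the reflected signal:
    heart rate (bpm), heart-rate variability, breathing rate, breathing depth."""
    return np.column_stack([
        rng.normal(75, 15, n_samples),    # heart rate
        rng.normal(50, 20, n_samples),    # heart-rate variability
        rng.normal(14, 4, n_samples),     # breathing rate
        rng.normal(1.0, 0.3, n_samples),  # breathing depth
    ])

# Synthetic training data: labels are random here; a real study would use
# ground-truth emotion labels collected while participants watched videos.
X_train = fake_radar_features(400)
y_train = rng.integers(0, len(EMOTIONS), 400)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Classify a new "participant" from their recovered vital-sign features.
X_new = fake_radar_features(1)
print("Predicted emotion:", EMOTIONS[model.predict(X_new)[0]])
```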

Ahsan Noor Khan, a PhD student and first author of the study, said: “We’re now looking to investigate how we could use low-cost existing systems, such as Wi-Fi routers, to detect emotions of a large number of people gathered, for instance in an office or work environment.” Among other things, this could be useful for HR departments to assess how new policies introduced in a meeting are being received, regardless of what the recipients might say. Outside of an office, police could use this technology to look for emotional changes in a crowd that might lead to violence.

The research team plans to examine public acceptance and ethical concerns around the use of this technology. Such concerns would not be surprising and conjure up a very Orwellian idea of the ‘thought police’ from 1984. In this novel, the thought police watchers are expert at reading people’s faces to ferret out beliefs unsanctioned by the state, though they never mastered learning exactly what a person was thinking.


Wednesday, January 27, 2021

What One Health System Learned About Providing Digital Services in the Pandemic

Marc Harrison
Harvard Business Review
Originally posted 11 Dec 20

Here are two excerpts:

Lesson 2: Digital care is safer during the pandemic.

A patient who’s tested positive for Covid doesn’t have to go see her doctor or go into an urgent care clinic to discuss her symptoms. Doctors and other caregivers who are providing virtual care for hospitalized Covid patients don’t face increased risk of exposure. They also don’t have to put on personal protective equipment, step into the patient’s room, then step outside and take off their PPE. We need those supplies, and telehealth helps us preserve them.

Intermountain Healthcare’s virtual hospital is especially well-suited for Covid patients. It works like this: In a regular hospital, you come into the ER, and we check you out and think you’re probably going to be okay, but you’re sick enough that we want to monitor you. So, we admit you.

With our virtual hospital — which uses a combination of telemedicine, home health, and remote patient monitoring — we send you home with a technology kit that allows us to check how you’re doing. You’ll be cared for by a virtual team, including a hospitalist who monitors your vital signs around the clock and home health nurses who do routine rounding. That’s working really well: Our clinical outcomes are excellent, our satisfaction scores are through the roof, and it’s less expensive. Plus, it frees up the hospital beds and staff we need to treat our sickest Covid patients.

(cut)

Lesson 4: Digital tools support the direction health care is headed.

Telehealth supports value-based care, in which hospitals and other care providers are paid based on the health outcomes of their patients, not on the amount of care they provide. The result is a greater emphasis on preventive care — which reduces unsustainable health care costs.

Intermountain serves a large population of at-risk, pre-paid consumers, and the more they use telehealth, the easier it is for them to stay healthy — which reduces costs for them and for us. The pandemic has forced payment systems, including the government’s, to keep up by expanding reimbursements for telehealth services.

This is worth emphasizing: If we can deliver care in lower-cost settings, we can reduce the cost of care. Some examples:
  • The average cost of a virtual encounter at Intermountain is $367 less than the cost of a visit to an urgent care clinic, physician’s office, or emergency department (ED).
  • Our virtual newborn ICU has helped us reduce the number of transports to our large hospitals by 65 a year since 2015. Not counting the clinical and personal benefits, that’s saved $350,000 per year in transportation costs.
  • Our internal study of 150 patients in one rural Utah town showed each patient saved an average of $2,000 in driving expenses and lost wages over a year’s time because he or she was able to receive telehealth care close to home. We also avoided pumping 106,460 kilograms of CO2 into the environment — and (per the following point) the town’s 24-bed hospital earned $1.6 million that otherwise would have shifted to a larger hospital in a bigger town.

Tuesday, January 12, 2021

Is that artificial intelligence ethical? Sony to review all products

NikkeiAsia
Nikkei staff writers
Originally posted 22 Dec 2020

Here is an excerpt:

Sony will start screening all of its AI-infused products for ethical risks as early as spring, Nikkei has learned. If a product is deemed ethically deficient, the company will improve it or halt development.

Sony uses AI in its latest generation of the Aibo robotic dog, for instance, which can recognize up to 100 faces and continues to learn through the cloud.

Sony will incorporate AI ethics into its quality control, using internal guidelines.

The company will review artificially intelligent products from development to post-launch on such criteria as privacy protection. Ethically deficient offerings will be modified or dropped.

An AI Ethics Committee, with its head appointed by the CEO, will have the power to halt development on products with issues.

Even products well into development could still be dropped. Ones already sold could be recalled if problems are found. The company plans to gradually broaden the AI ethics rules to offerings in finance and entertainment as well.

As AI finds its way into more devices, the responsibilities of developers are increasing, and companies are strengthening ethical guidelines.

Thursday, January 7, 2021

How Might Artificial Intelligence Applications Impact Risk Management?

John Banja
AMA J Ethics. 2020;22(11):E945-951. 

Abstract

Artificial intelligence (AI) applications have attracted considerable ethical attention for good reasons. Although AI models might advance human welfare in unprecedented ways, progress will not occur without substantial risks. This article considers 3 such risks: system malfunctions, privacy protections, and consent to data repurposing. To meet these challenges, traditional risk managers will likely need to collaborate intensively with computer scientists, bioinformaticists, information technologists, and data privacy and security experts. This essay will speculate on the degree to which these AI risks might be embraced or dismissed by risk management. In any event, it seems that integration of AI models into health care operations will almost certainly introduce, if not new forms of risk, then a dramatically heightened magnitude of risk that will have to be managed.

AI Risks in Health Care

Artificial intelligence (AI) applications in health care have attracted enormous attention as well as immense public and private sector investment in the last few years.1 The anticipation is that AI technologies will dramatically alter—perhaps overhaul—health care practices and delivery. At the very least, hospitals and clinics will likely begin importing numerous AI models, especially “deep learning” varieties that draw on aggregate data, over the next decade.

A great deal of the ethics literature on AI has recently focused on the accuracy and fairness of algorithms, worries over privacy and confidentiality, “black box” decisional unexplainability, concerns over “big data” on which deep learning AI models depend, AI literacy, and the like. Although some of these risks, such as security breaches of medical records, have been around for some time, their materialization in AI applications will likely present large-scale privacy and confidentiality risks. AI models have already posed enormous challenges to hospitals and facilities by way of cyberattacks on protected health information, and they will introduce new ethical obligations for providers who might wish to share patient data or sell it to others. Because AI models are themselves dependent on hardware, software, algorithmic development and accuracy, implementation, data sharing and storage, continuous upgrading, and the like, risk management will find itself confronted with a new panoply of liability risks. On the one hand, risk management can choose to address these new risks by developing mitigation strategies. On the other hand, because these AI risks present a novel landscape of risk that might be quite unfamiliar, risk management might choose to leave certain of those challenges to others. This essay will discuss this “approach-avoidance” possibility in connection with 3 categories of risk—system malfunctions, privacy breaches, and consent to data repurposing—and conclude with some speculations on how those decisions might play out.

Thursday, December 24, 2020

Google Employees Call Black Scientist's Ouster 'Unprecedented Research Censorship'

Bobby Allyn
www.npr.org
Originally published 3 Dec 20

Hundreds of Google employees have published an open letter following the firing of an accomplished scientist known for her research into the ethics of artificial intelligence and her work showing racial bias in facial recognition technology.

That scientist, Timnit Gebru, helped lead Google's Ethical Artificial Intelligence Team until Tuesday.

Gebru, who is Black, says she was forced out of the company after a dispute over a research paper and an email she subsequently sent to peers expressing frustration over how the tech giant treats employees of color and women.

"Instead of being embraced by Google as an exceptionally talented and prolific contributor, Dr. Gebru has faced defensiveness, racism, gaslighting, research censorship, and now a retaliatory firing," the open letter said. By Thursday evening, more than 400 Google employees and hundreds of outsiders — many of them academics — had signed it.

The research paper in question was co-authored by Gebru along with four others at Google and two other researchers. It examined the environmental and ethical implications of an AI tool used by Google and other technology companies, according to NPR's review of the draft paper.

The 12-page draft explored the possible pitfalls of relying on the tool, which scans massive amounts of information on the Internet and produces text as if written by a human. The paper argued it could end up mimicking hate speech and other types of derogatory and biased language found online. The paper also cautioned against the energy cost of using such large-scale AI models.

According to Gebru, she was planning to present the paper at a research conference next year, but then her bosses at Google stepped in and demanded she retract the paper or remove all the Google employees as authors.

Thursday, December 17, 2020

AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust

Capgemini Research Institute

In the wake of the COVID-19 crisis, our reliance on AI has skyrocketed. Today more than ever before, we look to AI to help us limit physical interactions, predict the next wave of the pandemic, disinfect our healthcare facilities and even deliver our food. But can we trust it?

In the latest report from the Capgemini Research Institute – AI and the Ethical Conundrum: How organizations can build ethically robust AI systems and gain trust – we surveyed over 800 organizations and 2,900 consumers to get a picture of the state of ethics in AI today. We wanted to understand what organizations can do to move to AI systems that are ethical by design, how they can benefit from doing so, and the consequences if they don’t. We found that while customers are becoming more trusting of AI-enabled interactions, organizations’ progress in ethical dimensions is underwhelming. And this is dangerous because once violated, trust can be difficult to rebuild.

Ethically sound AI requires a strong foundation of leadership, governance, and internal practices around audits, training, and operationalization of ethics. Building on this foundation, organizations have to:
  1. Clearly outline the intended purpose of AI systems and assess their overall potential impact
  2. Proactively deploy AI to achieve sustainability goals
  3. Embed diversity and inclusion principles proactively throughout the lifecycle of AI systems for advancing fairness
  4. Enhance transparency with the help of technology tools, humanize the AI experience and ensure human oversight of AI systems
  5. Ensure technological robustness of AI systems
  6. Empower customers with privacy controls to put them in charge of AI interactions.
For more information on ethics in AI, download the report.

Wednesday, October 21, 2020

Neurotechnology can already read minds: so how do we protect our thoughts?

Rafael Yuste
El Pais
Originally posted 11 Sept 20

Here is an excerpt:

On account of these and other developments, a group of 25 scientific experts – clinical engineers, psychologists, lawyers, philosophers and representatives of different brain projects from all over the world – met in 2017 at Columbia University, New York, and proposed ethical rules for the use of these neurotechnologies. We believe we are facing a problem that affects human rights, since the brain generates the mind, which defines us as a species. At the end of the day, it is about our essence – our thoughts, perceptions, memories, imagination, emotions and decisions.

To protect citizens from the misuse of these technologies, we have proposed a new set of human rights, called “neurorights.” The most urgent of these to establish is the right to the privacy of our thoughts, since the technologies for reading mental activity are more developed than the technologies for manipulating it.

To defend mental privacy, we are working on a three-pronged approach. The first consists of legislating “neuroprotection.” We believe that data obtained from the brain, which we call “neurodata,” should be rigorously protected by laws similar to those applied to organ donations and transplants. We ask that “neurodata” not be traded and only be extracted with the consent of the individual for medical or scientific purposes.

This would be a preventive measure to protect against abuse. The second approach involves the proposal of proactive ideas; for example, that the companies and organizations that manufacture these technologies should adhere to a code of ethics from the outset, just as doctors do with the Hippocratic Oath. We are working on a “technocratic oath” with Xabi Uribe-Etxebarria, founder of the artificial intelligence company Sherpa.ai, and with the Catholic University of Chile.

The third approach involves engineering, and consists of developing both hardware and software so that brain “neurodata” remains private, and that it is possible to share only select information. The aim is to ensure that the most personal data never leaves the machines that are wired to our brain. One option is to use systems that are already used with financial data: open-source files and blockchain technology so that we always know where it came from, and smart contracts to prevent data from getting into the wrong hands. And, of course, it will be necessary to educate the public and make sure that no device can use a person’s data unless he or she authorizes it at that specific time.
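
As a rough illustration of that engineering idea, the sketch below (plain Python, not the group's actual design) keeps a hash-chained, append-only log of neurodata sharing events and refuses to record a transfer without explicit consent. The record fields, the consent check, and the class name are assumptions made for the example.

```python
# Toy provenance ledger for "neurodata": each entry is hash-linked to the
# previous one, so later tampering with the sharing history is detectable.
import hashlib
import json
import time

class NeurodataLedger:
    def __init__(self):
        self.entries = []

    def record(self, subject_id: str, purpose: str, consent_given: bool) -> dict:
        if not consent_given:
            raise PermissionError("neurodata may only be shared with explicit consent")
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "subject": subject_id,
            "purpose": purpose,        # e.g. "medical" or "scientific"
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute every hash link and confirm the chain is intact."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

ledger = NeurodataLedger()
ledger.record("participant-001", "medical", consent_given=True)
print("chain intact:", ledger.verify())
```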

Friday, August 7, 2020

Technology Can Help Us, but People Must Be the Moral Decision Makers

Andrew Briggs
medium.com
Originally posted 8 June 20

Here is an excerpt:

Many individuals in technology fields see tools such as machine learning and AI as precisely that — tools — which are intended to be used to support human endeavors, and they tend to argue how such tools can be used to optimize technical decisions. Those people concerned with the social impacts of these technologies tend to approach the debate from a moral stance and to ask how these technologies should be used to promote human flourishing.

This is not an unresolvable conflict, nor is it purely academic. As the world grapples with the coronavirus pandemic, society is increasingly faced with decisions about how technology should be used: Should sick people’s contacts be traced using cell phone data? Should AIs determine who can or cannot work or travel based on their most recent COVID-19 test results? These questions have both technical and moral dimensions. Thankfully, humans have a unique capacity for moral choices in a way that machines simply do not.

One of our findings is that for humanity to thrive in the new digital age, we cannot disconnect our technical decisions and innovations from moral reasoning. New technologies require innovations in society. To think that the advance of technology can be stopped, or that established moral modalities need not be applied afresh to new circumstances, is a fraught path. There will often be tradeoffs between social goals, such as maintaining privacy, and technological goals, such as identifying disease vectors.

The info is here.

Monday, July 13, 2020

Amazon Halts Police Use Of Its Facial Recognition Technology

Bobby Allyn
www.npr.org
Originally posted 10 June 20

Amazon announced on Wednesday a one-year moratorium on police use of its facial-recognition technology, yielding to pressure from police-reform advocates and civil rights groups.

It is unclear how many law enforcement agencies in the U.S. deploy Amazon's artificial intelligence tool, but an official with the Washington County Sheriff's Office in Oregon confirmed that it will be suspending its use of Amazon's facial recognition technology.

Researchers have long criticized the technology for producing inaccurate results for people with darker skin. Studies have also shown that the technology can be biased against women and younger people.

IBM said earlier this week that it would quit the facial-recognition business altogether. In a letter to Congress, chief executive Arvind Krishna condemned software that is used "for mass surveillance, racial profiling, violations of basic human rights and freedoms."

And Microsoft President Brad Smith told The Washington Post during a livestream Thursday morning that his company has not been selling its technology to law enforcement. Smith said he has no plans to do so until there is a national law.

The info is here.

Thursday, July 2, 2020

Professional Psychology: Collection Agencies, Confidentiality, Records, Treatment, and Staff Supervision in New Jersey

SUPERIOR COURT OF NEW JERSEY
APPELLATE DIVISION
DOCKET NO. A-4975-17T3

In the Matter of the Suspension or Revocation of the License of L. Barry Helfmann, Psy.D.

Here are two excerpts:

The complaint included five counts. It alleged Dr. Helfmann failed to do the following: take reasonable measures to protect confidentiality of the Partnership's patients' private health information; maintain permanent records that accurately reflected patient contact for treatment purposes; maintain records of professional quality; timely release records requested by a patient; and properly instruct and supervise temporary staff concerning patient confidentiality and record maintenance. The Attorney General sought sanctions under the UEA.

(cut)

The regulation is clear. The doctor's argument to the contrary, that a psychologist could somehow confuse his collection attorney with a patient's authorized representative, is refuted by the regulation's plain language as well as consideration of its entire context. The doctor's argument is without sufficient merit to warrant further discussion. R. 2:11-3(e)(1)(E).

We find nothing arbitrary about the Board's rejection of Dr. Helfmann's argument that he violated no rule or regulation because he relied on the advice of counsel in providing the Partnership's collection attorney with patients' confidential information. His assertion is contrary to the sworn testimony of the collection attorney who was deposed, as distinguished from another collection attorney with whom the doctor spoke in the distant past. The latter attorney's purported statement that confidential information might be necessary to resolve a patient's outstanding fee does not consider, let alone resolve, the propriety of a psychologist releasing such information in the face of clear statutory and regulatory prohibitions.

The Board found that Dr. Helfmann, not his collection attorneys, was charged with the professional responsibility of preserving his patients' confidential information. Perhaps the doctor's argument that he relied on the advice of counsel would have had greater appeal had he asked for a legal opinion on providing confidential patient information to collection attorneys in view of the psychologist-patient privilege and a specific regulatory prohibition against doing so absent a statutory or traditional exception. That the Board found unpersuasive Dr. Helfmann's hearsay testimony about what attorneys told him years ago is hardly arbitrary and capricious, considering the Partnership's current collection attorney's testimony and Dr. Helfmann's statutory and regulatory obligations to preserve confidentiality.

The decision is here.

Wednesday, July 1, 2020

Unusual Legal Case: Small Social Circles, Boundaries, and Harm

This legal case shows how much our social circles interrelate and how easily boundaries can be violated.  If you ever believe that you are safe from boundary violations in our current, complex culture, you may want to rethink that position.  There is a lesson here for all of us.  I will excerpt a fascinating portion of the case below.

Roetzel and Andres
jdsupra.com
Originally posted 10 June 20

Possible Employer Vicarious Liability For Employee’s HIPAA Violation Even When Employee Engages In Unauthorized Act

Here is the excerpt:

When the plaintiff came in for her appointment, she handed the Parkview employee a filled-out patient information sheet. The employee then spent about one-minute inputting that information onto Parkview’s electronic health record. The employee recognized the plaintiff’s name as someone who had liked a photo of the employee’s husband on his Facebook account. Suspecting that the plaintiff might have had, or was then having, an affair with her husband, the employee sent some texts to her husband relating to the fact the plaintiff was a Parkview patient. Her texts included information from the patient chart that the employee had created from the patient’s information sheet, such as the patient’s name, her position as a dispatcher, and the underlying reasons for the plaintiff’s visit to the OB/Gyn. Even though such information was not included on the chart, the employee also texted that the plaintiff was HIV-positive and had had more than fifty sexual partners. While using the husband’s phone, the husband’s sister saw the texts. The sister then reported the texts to Parkview. Upon receipt of the sister’s report, Parkview initiated an investigation into the employee’s conduct and ultimately terminated the employee. As part of that investigation, Parkview notified the plaintiff of the disclosure of her protected health information.

The info is here.

Monday, June 22, 2020

Ethics of Artificial Intelligence and Robotics

Müller, Vincent C.
The Stanford Encyclopedia of Philosophy
(Summer 2020 Edition)

1. Introduction

1.1 Background of the Field

The ethics of AI and robotics is often focused on “concerns” of various sorts, which is a typical response to new technologies. Many such concerns turn out to be rather quaint (trains are too fast for souls); some are predictably wrong when they suggest that the technology will fundamentally change humans (telephones will destroy personal communication, writing will destroy memory, video cassettes will make going out redundant); some are broadly correct but moderately relevant (digital technology will destroy industries that make photographic film, cassette tapes, or vinyl records); but some are broadly correct and deeply relevant (cars will kill children and fundamentally change the landscape). The task of an article such as this is to analyse the issues and to deflate the non-issues.

Some technologies, like nuclear power, cars, or plastics, have caused ethical and political discussion and significant policy efforts to control the trajectory of these technologies, usually only once some damage is done. In addition to such “ethical concerns”, new technologies challenge current norms and conceptual systems, which is of particular interest to philosophy. Finally, once we have understood a technology in its context, we need to shape our societal response, including regulation and law. All these features also exist in the case of new AI and Robotics technologies—plus the more fundamental fear that they may end the era of human control on Earth.

The ethics of AI and robotics has seen significant press coverage in recent years, which supports related research, but also may end up undermining it: the press often talks as if the issues under discussion were just predictions of what future technology will bring, and as though we already know what would be most ethical and how to achieve that. Press coverage thus focuses on risk, security (Brundage et al. 2018, see under Other Internet Resources [hereafter OIR]), and prediction of impact (e.g., on the job market). The result is a discussion of essentially technical problems that focus on how to achieve a desired outcome. Current discussions in policy and industry are also motivated by image and public relations, where the label “ethical” is really not much more than the new “green”, perhaps used for “ethics washing”. For a problem to qualify as a problem for AI ethics would require that we do not readily know what the right thing to do is. In this sense, job loss, theft, or killing with AI is not a problem in ethics, but whether these are permissible under certain circumstances is a problem. This article focuses on the genuine problems of ethics where we do not readily know what the answers are.

The entry is here.

Sunday, May 31, 2020

The Answer to a COVID-19 Vaccine May Lie in Our Genes, But ...

Ifeoma Ajunwa & Forrest Briscoe
Scientific American
Originally posted 13 May 2020

Here is an excerpt:

Although the rationale for expanded genetic testing is obviously meant for the greater good, such testing could also bring with it a host of privacy and economic harms. In the past, genetic testing has also been associated with employment discrimination. Even before the current crisis, companies like 23andMe and Ancestry assembled and started operating their own private long-term large-scale databases of U.S. citizens’ genetic and health data. 23andMe and Ancestry recently announced they would use their databases to identify genetic factors that predict COVID-19 susceptibility.

Other companies are growing similar databases, for a range of purposes. And the NIH’s AllofUs program is constructing a genetic database, owned by the federal government, in which data from one million people will be used to study various diseases. These new developments indicate an urgent need for appropriate genetic data governance.

Leaders from the biomedical research community recently proposed a voluntary code of conduct for organizations constructing and sharing genetic databases. We believe that the public has a right to understand the risks of genetic databases and a right to have a say in how those databases will be governed. To ascertain public expectations about genetic data governance, we surveyed over two thousand (n=2,020) individuals who altogether are representative of the general U.S. population. After educating respondents about the key benefits and risks associated with DNA databases—using information from recent mainstream news reports—we asked how willing they would be to provide their DNA data for such a database.

The info is here.

Tuesday, April 21, 2020

When Google and Apple get privacy right, is there still something wrong?

Tamar Sharon
Medium.com
Originally posted 15 April 20

Here is an excerpt:

As the understanding that we are in this for the long run settles in, the world is increasingly turning its attention to technological solutions to address the devastating COVID-19 virus. Contact-tracing apps in particular seem to hold much promise. Using Bluetooth technology to communicate between users’ smartphones, these apps could map contacts between infected individuals and alert people who have been in proximity to an infected person. Some countries, including China, Singapore, South Korea and Israel, have deployed these early on. Health authorities in the UK, France, Germany, the Netherlands, Iceland, the US and other countries, are currently considering implementing such apps as a means of easing lock-down measures.

There are some bottlenecks. Do they work? The effectiveness of these applications has not been evaluated — in isolation or as part of an integrated strategy. How many people would need to use them? Not everyone has a smartphone. Even in rich countries, the most vulnerable group, aged over 80, is least likely to have one. Then there’s the question about fundamental rights and liberties, first and foremost privacy and data protection. Will contact-tracing become part of a permanent surveillance structure in the prolonged “state of exception” we are sleep-walking into?

Prompted by public discussions about this last concern, a number of European governments have indicated the need to develop such apps in a way that would be privacy preserving, while independent efforts involving technologists and scientists to deliver privacy-centric solutions have been cropping up. The Pan-European Privacy-Preserving Proximity Tracing (PEPP-PT) initiative, and in particular the Decentralised Privacy-Preserving Proximity Tracing (DP-3T) protocol, which provides an outline for a decentralised system, are notable forerunners. Somewhat late in the game, the European Commission last week issued a Recommendation for a pan-European approach to the adoption of contact-tracing apps that would respect fundamental rights such as privacy and data protection.
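
For readers wondering how a decentralised design in the spirit of DP-3T keeps matching on the phone rather than on a central server, here is a minimal sketch under simplified assumptions: the key schedule, the epoch length, and the identifier size below are illustrative, not the actual specification. Each phone derives rotating ephemeral identifiers from a secret daily key, and only the daily keys of people who test positive are ever published, so exposure checks happen locally.

```python
# Minimal sketch of decentralised proximity tracing (DP-3T-style, simplified).
import hashlib
import hmac
import secrets

EPOCHS_PER_DAY = 96  # e.g., a new ephemeral ID every 15 minutes (assumption)

def new_daily_key() -> bytes:
    return secrets.token_bytes(32)

def ephemeral_ids(daily_key: bytes) -> list:
    """Derive the day's rotating Bluetooth identifiers from the secret daily key."""
    return [
        hmac.new(daily_key, f"epoch-{i}".encode(), hashlib.sha256).digest()[:16]
        for i in range(EPOCHS_PER_DAY)
    ]

# Alice's phone broadcasts her ephemeral IDs; Bob's phone stores the ones it hears.
alice_key = new_daily_key()
heard_by_bob = set(ephemeral_ids(alice_key)[10:13])  # a brief encounter

# If Alice later tests positive, she uploads only her daily key. Bob's phone
# re-derives her ephemeral IDs locally and checks for a match, so no central
# server ever learns who met whom.
published_keys = [alice_key]
exposed = any(
    eph in heard_by_bob
    for key in published_keys
    for eph in ephemeral_ids(key)
)
print("Bob exposure notification:", exposed)
```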

The info is here.

Tuesday, April 14, 2020

New Data Rules Could Empower Patients but Undermine Their Privacy

Natasha Singer
The New York Times
Originally posted 9 March 20

Here is an excerpt:

The Department of Health and Human Services said the new system was intended to make it as easy for people to manage their health care on smartphones as it is for them to use apps to manage their finances.

Giving people access to their medical records via mobile apps is a major milestone for patient rights, even as it may heighten risks to patient privacy.

Prominent organizations like the American Medical Association have warned that, without accompanying federal safeguards, the new rules could expose people who share their diagnoses and other intimate medical details with consumer apps to serious data abuses.

Although Americans have had the legal right to obtain a copy of their personal health information for two decades, many people face obstacles in getting that data from providers.

Some physicians still require patients to pick up computer disks — or even photocopies — of their records in person. Some medical centers use online portals that offer access to basic health data, like immunizations, but often do not include information like doctors’ consultation notes that might help patients better understand their conditions and track their progress.

The new rules are intended to shift that power imbalance toward the patient.

The info is here.

Monday, April 13, 2020

Lawmakers Push Again for Info on Google Collecting Patient Data

Rob Copeland
Wall Street Journal
Originally published 3 March 20

A bipartisan trio of U.S. senators pushed again for answers on Google’s controversial “Project Nightingale,” saying the search giant evaded requests for details on its far-reaching data tie-up with health giant Ascension.

The senators, in a letter Monday to St. Louis-based Ascension, said they were put off by the lack of substantive disclosure around the effort.

Project Nightingale was revealed in November in a series of Wall Street Journal articles that described Google’s then-secret engagement to collect and crunch the personal health information of millions of patients across 21 states.

Sens. Richard Blumenthal (D., Conn.), Bill Cassidy (R., La.), and Elizabeth Warren (D., Mass.) subsequently wrote to the Alphabet Inc. unit seeking basic information about the program, including the number of patients involved, the data shared and who at Google had access.

The head of Google Health, Dr. David Feinberg, responded with a letter in December that largely stuck to generalities, according to correspondence reviewed by the Journal.

(cut)

Ascension earlier this year fired an employee who had reached out to media, lawmakers and regulators with concerns about Project Nightingale, a person familiar with the matter said. 

The employee, who described himself as a whistleblower, was told by Ascension higher-ups that he had shared information about the initiative that was intended to be secret, the person said.

Nick Ragone, a spokesman for Ascension—one of the U.S.’s largest health-care systems with 2,600 hospitals, doctors’ offices and other facilities—declined to say why the employee in question was fired. 

Thursday, April 2, 2020

Intelligence, Surveillance, and Ethics in a Pandemic

Jessica Davis
JustSecurity.org
Originally posted 31 March 20

Here is an excerpt:

It is imperative that States and their citizens question how much freedom and privacy should be sacrificed to limit the impact of this pandemic. It is also not sufficient to ask simply “if” something is legal; we should also ask whether it should be, and under what circumstances. States should consider the ethics of surveillance and intelligence, specifically whether it is justified, done under the right authority, if it can be done with intentionality and proportionality and as a last resort, and if targets of surveillance can be separated from non-targets to avoid mass surveillance. These considerations, combined with enhanced transparency and sunset clauses on the use of intelligence and surveillance techniques, can allow States to ethically deploy these powerful tools to help stop the spread of the virus.

States are employing intelligence and surveillance techniques to contain the spread of the illness because these methods can help track and identify infected or exposed people and enforce quarantines. States have used cell phone data to track people at risk of infection or transmission and financial data to identify places frequented by at-risk people. Social media intelligence is also ripe for exploitation in terms of identifying social contacts. This intelligence is increasingly being combined with health data, creating a unique (and informative) picture of a person’s life that is undoubtedly useful for virus containment. But how long should States have access to this type of information on their citizens, if at all? Considering natural limits to the collection of granular data on citizens is imperative, both in terms of time and access to this data.

The info is here.

Wednesday, February 26, 2020

Ethical and Legal Aspects of Ambient Intelligence in Hospitals

Gerke S, Yeung S, Cohen IG.
JAMA. Published online January 24, 2020.
doi:10.1001/jama.2019.21699

Ambient intelligence in hospitals is an emerging form of technology characterized by a constant awareness of activity in designated physical spaces and of the use of that awareness to assist health care workers such as physicians and nurses in delivering quality care. Recently, advances in artificial intelligence (AI) and, in particular, computer vision, the domain of AI focused on machine interpretation of visual data, have propelled broad classes of ambient intelligence applications based on continuous video capture.

One important goal is for computer vision-driven ambient intelligence to serve as a constant and fatigue-free observer at the patient bedside, monitoring for deviations from intended bedside practices, such as reliable hand hygiene and central line insertions. While early studies took place in single patient rooms, more recent work has demonstrated ambient intelligence systems that can detect patient mobilization activities across 7 rooms in an ICU ward and detect hand hygiene activity across 2 wards in 2 hospitals.
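As a toy illustration of what "monitoring for deviations from intended bedside practices" can mean downstream of the computer-vision model (which is not implemented here), the sketch below takes a stream of recognized events per caregiver and flags patient contacts that were not preceded by hand hygiene. The event names and the rule are assumptions made for the example, not the systems described in the cited studies.

```python
# Toy deviation monitor: flags patient contacts not preceded by hand hygiene.
from collections import defaultdict

def hand_hygiene_alerts(events):
    """events: iterable of (caregiver_id, event), where event is
    'enter_room', 'hand_hygiene', or 'patient_contact'."""
    clean = defaultdict(bool)   # has this caregiver sanitized since entering the room?
    alerts = []
    for caregiver, event in events:
        if event == "enter_room":
            clean[caregiver] = False
        elif event == "hand_hygiene":
            clean[caregiver] = True
        elif event == "patient_contact" and not clean[caregiver]:
            alerts.append(caregiver)
    return alerts

stream = [
    ("nurse_1", "enter_room"), ("nurse_1", "hand_hygiene"), ("nurse_1", "patient_contact"),
    ("doc_2", "enter_room"), ("doc_2", "patient_contact"),   # missed hand hygiene
]
print("alerts:", hand_hygiene_alerts(stream))  # -> ['doc_2']
```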

As computer vision–driven ambient intelligence accelerates toward a future when its capabilities will most likely be widely adopted in hospitals, it also raises new ethical and legal questions. Although some of these concerns are familiar from other health surveillance technologies, what is distinct about ambient intelligence is that the technology not only captures video data as many surveillance systems do but does so by targeting the physical spaces where sensitive patient care activities take place, and furthermore interprets the video such that the behaviors of patients, health care workers, and visitors may be constantly analyzed. This Viewpoint focuses on 3 specific concerns: (1) privacy and reidentification risk, (2) consent, and (3) liability.

The info is here.

Saturday, February 22, 2020

Hospitals Give Tech Giants Access to Detailed Medical Records

Melanie Evans
The Wall Street Journal
Originally published 20 Jan 20

Here is an excerpt:

Recent revelations that Alphabet Inc.’s Google is able to tap personally identifiable medical data about patients, reported by The Wall Street Journal, have raised concerns among lawmakers, patients and doctors about privacy.

The Journal also recently reported that Google has access to more records than first disclosed in a deal with the Mayo Clinic.

Mayo officials say the deal allows the Rochester, Minn., hospital system to share personal information, though it has no current plans to do so.

“It was not our intention to mislead the public,” said Cris Ross, Mayo’s chief information officer.

Dr. David Feinberg, head of Google Health, said Google is one of many companies with hospital agreements that allow the sharing of personally identifiable medical data to test products used in treatment and operations.

(cut)

Amazon, Google, IBM and Microsoft are vying for hospitals’ business in the cloud storage market in part by offering algorithms and technology features. To create and launch algorithms, tech companies are striking separate deals for access to medical-record data for research, development and product pilots.

The Health Insurance Portability and Accountability Act, or HIPAA, lets hospitals confidentially send data to business partners related to health insurance, medical devices and other services.

The law requires hospitals to notify patients about health-data uses, but they don’t have to ask for permission.

Data that can identify patients—including name and Social Security number—can’t be shared unless such records are needed for treatment, payment or hospital operations. Deals with tech companies to develop apps and algorithms can fall under these broad umbrellas. Hospitals aren’t required to notify patients of specific deals.

The info is here.