Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Showing posts with label Data Mining.

Saturday, February 22, 2020

Hospitals Give Tech Giants Access to Detailed Medical Records

Melanie Evans
The Wall Street Journal
Originally published January 20, 2020

Here is an excerpt:

Recent revelations that Alphabet Inc.’s Google is able to tap personally identifiable medical data about patients, reported by The Wall Street Journal, have raised concerns among lawmakers, patients and doctors about privacy.

The Journal also recently reported that Google has access to more records than first disclosed in a deal with the Mayo Clinic.

Mayo officials say the deal allows the Rochester, Minn., hospital system to share personal information, though it has no current plans to do so.

“It was not our intention to mislead the public,” said Cris Ross, Mayo’s chief information officer.

Dr. David Feinberg, head of Google Health, said Google is one of many companies with hospital agreements that allow the sharing of personally identifiable medical data to test products used in treatment and operations.

(cut)

Amazon, Google, IBM and Microsoft are vying for hospitals’ business in the cloud storage market in part by offering algorithms and technology features. To create and launch algorithms, tech companies are striking separate deals for access to medical-record data for research, development and product pilots.

The Health Insurance Portability and Accountability Act, or HIPAA, lets hospitals confidentially send data to business partners related to health insurance, medical devices and other services.

The law requires hospitals to notify patients about health-data uses, but they don’t have to ask for permission.

Data that can identify patients—including name and Social Security number—can’t be shared unless such records are needed for treatment, payment or hospital operations. Deals with tech companies to develop apps and algorithms can fall under these broad umbrellas. Hospitals aren’t required to notify patients of specific deals.
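To make those broad umbrellas concrete, here is a minimal, hypothetical Python sketch of the kind of purpose gate a compliance check might apply before identifiable records go to a business partner. The three permitted purposes come from the paragraph above; the function name, arguments and logic are illustrative assumptions, not an actual HIPAA implementation.

    # Hypothetical sketch only: a coarse purpose gate loosely modeled on the rule
    # described above. Not legal advice and not a real compliance tool; the names
    # and structure are assumptions for illustration.
    PERMITTED_PURPOSES = {"treatment", "payment", "operations"}

    def may_share_identifiable_data(purpose: str, has_business_agreement: bool) -> bool:
        """Return True if sharing could plausibly fall under the broad purposes above."""
        return has_business_agreement and purpose.lower() in PERMITTED_PURPOSES

    # A product-development deal framed as "operations" passes this coarse gate,
    # which is exactly the ambiguity the article points to.
    print(may_share_identifiable_data("operations", True))   # True
    print(may_share_identifiable_data("marketing", True))    # False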

The info is here.

Saturday, September 15, 2018

Social Science One And How Top Journals View The Ethics Of Facebook Data Research

Kalev Leetaru
Forbes.com
Originally posted on August 13, 2018

Here is an excerpt:

At the same time, Social Science One’s decision to leave all ethical questions TBD and to eliminate the right to informed consent or the ability to opt out of research fundamentally redefines what it means to conduct research in the digital era, normalizing the removal of these once sacred ethical tenets. Given the refusal of one of its committee members to provide replication data for his own study and the statement by another committee member that “I have articulated the argument that ToS are not, and should not be considered, ironclad rules binding the activities of academic researchers. … I don't think researchers should reasonably be expected to adhere to such conditions, especially at a time when officially sanctioned options for collecting social media data are disappearing left and right,” the result is an ethically murky landscape in which it is unclear just where Social Science One draws the line at what it will or will not permit.

Given Facebook’s new focus on “privacy first” I asked the company whether it would commit to offering its two billion users a new profile setting allowing them to opt out of having their data made available to academic researchers such as Social Science One. As it has repeatedly done in the past, the company declined to comment.

The info is here.

Friday, September 7, 2018

23andMe's Pharma Deals Have Been the Plan All Along

Megan Molteni
www.wired.com
Originally posted August 3, 2018

Here is an excerpt:

So last week’s announcement that one of the world’s biggest drugmakers, GlaxoSmithKline, is gaining exclusive rights to mine 23andMe’s customer data for drug targets should come as no surprise. (Neither should GSK’s $300 million investment in the company). 23andMe has been sharing insights gleaned from consented customer data with GSK and at least six other pharmaceutical and biotechnology firms for the past three and a half years. And offering access to customer information in the service of science has been 23andMe’s business plan all along, as WIRED noted when it first began covering the company more than a decade ago.

But some customers were still surprised and angry, unaware of what they had already signed (and spat) away. GSK will receive the same kind of data pharma partners have generally received—summary level statistics that 23andMe scientists gather from analyses on de-identified, aggregate customer information—though it will have four years of exclusive rights to run analyses to discover new drug targets. Supporting this kind of translational work is why some customers signed up in the first place. But it’s clear the days of blind trust in the optimistic altruism of technology companies are coming to a close.

“I think we’re just operating now in a much more untrusting environment,” says Megan Allyse, a health policy researcher at the Mayo Clinic who studies emerging genetic technologies. “It’s no longer enough for companies to promise to make people healthy through the power of big data.”
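For readers wondering what the “summary level statistics … on de-identified, aggregate customer information” mentioned above might look like in practice, here is a rough Python sketch that reduces row-level records to cell counts and suppresses small cells. The field names, threshold and toy data are assumptions for illustration, not 23andMe’s actual pipeline.

    # Illustrative sketch only: turn row-level records into aggregate counts and
    # suppress small cells. Field names, threshold and toy data are assumptions.
    from collections import Counter

    records = [
        {"carries_variant": True,  "reports_trait": True},
        {"carries_variant": True,  "reports_trait": False},
        {"carries_variant": False, "reports_trait": True},
        {"carries_variant": False, "reports_trait": False},
        {"carries_variant": True,  "reports_trait": True},
    ]

    MIN_CELL_SIZE = 2  # cells smaller than this are suppressed to limit re-identification risk

    counts = Counter((r["carries_variant"], r["reports_trait"]) for r in records)
    summary = {cell: (n if n >= MIN_CELL_SIZE else None) for cell, n in counts.items()}

    print(summary)  # only aggregate counts cross the boundary, never individual rows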

The info is here.

Friday, June 15, 2018

Tech giants need to build ethics into AI from the start

James Titcomb
The Telegraph
Originally posted May 13, 2018

Here is an excerpt:

But excitement about the software soon turned to apprehension about the ethical minefield it created. Google’s initial demo gave no indication that the person on the other end of the phone would be alerted that they were talking to a robot. The software even had human-like quirks built into it, stopping to say “um” and “mm-hmm”, a quality designed to seem cute but that ended up appearing more deceptive.

Some found the whole idea that a person should have to go through an artificial conversation with a robot somewhat demeaning; insulting even.

After a day of criticism, Google attempted to play down some of the concerns. It said the technology had no fixed release date, would take into account people’s concerns and promised to ensure that the software identified itself as such at the start of every phone call.

But the fact that it did not do this immediately was not a promising sign. The last two years of massive data breaches, evidence of Russian propaganda campaigns on social media and privacy failures have proven what should always have been obvious: that the internet has as much power to do harm as good. Every frontier technology now needs to be built with at least some level of paranoia; some person asking: “How could this be abused?”

The information is here.

Tuesday, May 15, 2018

Google code of ethics on military contracts could hinder Pentagon work

Brittany De Lea
FoxBusiness.com
Originally published April 13, 2018

Google is among the frontrunners for a lucrative, multibillion dollar contract with the Pentagon, but ethical concerns among some of its employees may pose a problem.

The Defense Department’s pending cloud storage contract, known as Joint Enterprise Defense Infrastructure (JEDI), could span a decade and will likely be its largest yet – valued in the billions of dollars. The department issued draft requests for proposals to host sensitive and classified information and will likely announce the winner later this year.

While Google, Microsoft, Amazon and Oracle are viewed as the major contenders for the job, Google’s employees have voiced concern about creating products for the U.S. government. More than 3,000 of the tech giant’s employees signed a letter, released this month, addressed to company CEO Sundar Pichai, protesting involvement in a Pentagon pilot program called Project Maven.

“We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology,” the letter, obtained by The New York Times, read.

The article is here.

Saturday, May 5, 2018

Deep learning: Why it’s time for AI to get philosophical

Catherine Stinson
The Globe and Mail
Originally published March 23, 2018

Here is an excerpt:

Another kind of effort at fixing AI’s ethics problem is the proliferation of crowdsourced ethics projects, which have the commendable goal of a more democratic approach to science. One example is DJ Patil’s Code of Ethics for Data Science, which invites the data-science community to contribute ideas but doesn’t build up from the decades of work already done by philosophers, historians and sociologists of science. Then there’s MIT’s Moral Machine project, which asks the public to vote on questions such as whether a self-driving car with brake failure ought to run over five homeless people rather than one female doctor. Philosophers call these “trolley problems” and have published thousands of books and papers on the topic over the past half-century. Comparing the views of professional philosophers with those of the general public can be eye-opening, as experimental philosophy has repeatedly shown, but simply ignoring the experts and taking a vote instead is irresponsible.

The point of making AI more ethical is so it won’t reproduce the prejudices of random jerks on the internet. Community participation throughout the design process of new AI tools is a good idea, but let’s not do it by having trolls decide ethical questions. Instead, representatives from the populations affected by technological change should be consulted about what outcomes they value most, what needs the technology should address and whether proposed designs would be usable given the resources available. Input from residents of heavily policed neighbourhoods would have revealed that a predictive policing system trained on historical data would exacerbate racial profiling. Having a person of colour on the design team for that soap dispenser should have made it obvious that a peachy skin tone detector wouldn’t work for everyone. Anyone who has had a stalker is sure to notice the potential abuses of selfie drones. Diversifying the pool of talent in AI is part of the solution, but AI also needs outside help from experts in other fields, more public consultation and stronger government oversight.
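The predictive-policing point can be made concrete with a toy simulation. The Python sketch below assumes two neighbourhoods with identical underlying offense rates but unequal historical arrest records; patrols allocated in proportion to past arrests keep producing more recorded arrests where policing is already heaviest. All numbers and the linear “detection” rule are assumptions for illustration only.

    # Toy feedback-loop simulation, assumptions only: two areas with identical true
    # offense rates, but one starts with more recorded arrests. Patrols follow the
    # historical data, and recorded arrests follow the patrols.
    true_rate = {"area_a": 10, "area_b": 10}   # identical underlying rates
    recorded  = {"area_a": 30, "area_b": 10}   # area_a starts over-policed

    for year in range(5):
        total = sum(recorded.values())
        patrols = {a: 20 * recorded[a] / total for a in recorded}   # 20 patrols, allocated by history
        for a in recorded:
            recorded[a] += int(true_rate[a] * patrols[a] / 10)      # arrests track patrols, not true rates

    print(recorded)  # the initial gap is entrenched and grows in absolute terms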

The information is here.

Tuesday, April 3, 2018

Cambridge Analytica: You Can Have My Money but Not My Vote

Emily Feng-Gu
Practical Ethics
Originally posted March 31, 2018

Here is an excerpt:

On one level, the Cambridge Analytica scandal concerns data protection, privacy, and informed consent. The data involved was not, as Facebook insisted, obtained via a ‘breach’ or a ‘leak’. User data was as safe as it had always been – which is to say, not very safe at all. At the time, the harvesting of data, including that of unconsenting Facebook friends, by third-party apps was routine policy for Facebook, provided it was used only for academic purposes. Aleksandr Kogan, the Cambridge researcher who created the third-party app in question, violated the agreement only when the data was passed on to Cambridge Analytica. Facebook failed to protect its users’ data privacy, that much is clear.

But are risks like these transparent to users? There is a serious concern about informed consent in a digital age. Most people are unlikely to have the expertise necessary to fully understand what it means to use online and other digital services. Consider Facebook: users sign up for an ostensibly free social media service. Facebook did not, however, accrue billions in revenue by offering a service for nothing in return; they profit from having access to large amounts of personal data. It is doubtful that the costs to personal and data privacy are made clear to users, some of whom are children or adolescents. For most people, the concept of big data is likely to be nebulous at best. What does it matter if someone has access to which Pages we have Liked? What exactly does it mean for third-party apps to be given access to data? When signing up to Facebook, I hazard that few people imagined clicking ‘I agree’ could play a role in attempts to influence election outcomes. A jargon-laden ‘terms and conditions’ segment is not enough to inform users about what precisely they are consenting to.

The blog post is here.

Saturday, March 24, 2018

Facebook employs psychologist whose firm sold data to Cambridge Analytica

Paul Lewis and Julia Carrie Wong
The Guardian
Originally published March 18, 2018

Here are two excerpts:

The co-director of a company that harvested data from tens of millions of Facebook users before selling it to the controversial data analytics firm Cambridge Analytica is currently working for the tech giant as an in-house psychologist.

Joseph Chancellor was one of two founding directors of Global Science Research (GSR), the company that harvested Facebook data using a personality app under the guise of academic research and later shared the data with Cambridge Analytica.

He was hired to work at Facebook as a quantitative social psychologist around November 2015, roughly two months after leaving GSR, which had by then acquired data on millions of Facebook users.

Chancellor is still working as a researcher at Facebook’s Menlo Park headquarters in California, where psychologists frequently conduct research and experiments using the company’s vast trove of data on more than 2 billion users.

(cut)

In the months that followed the creation of GSR, the company worked in collaboration with Cambridge Analytica to pay hundreds of thousands of users to take the test as part of an agreement under which their data would be collected for academic use.

However, the app also collected the information of the test-takers’ Facebook friends, leading to the accumulation of a data pool tens of millions strong.

That data was sold to Cambridge Analytica as part of a commercial agreement.

Facebook’s “platform policy” allowed collection of friends’ data only to improve the user experience in the app and barred it from being sold on or used for advertising.
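A quick back-of-envelope Python sketch shows how a pool of paying test-takers in the hundreds of thousands balloons into tens of millions of profiles once friends’ data is swept in. The friend count and overlap factor below are assumptions for illustration, not figures reported in the article.

    # Back-of-envelope arithmetic only; the friend count and overlap factor are
    # assumptions, not reported figures.
    test_takers = 300_000       # "hundreds of thousands" of paid test-takers
    avg_friends = 200           # assumed average number of friends per test-taker
    distinct_share = 0.7        # assumed share of those friends who are distinct people

    reachable = test_takers + int(test_takers * avg_friends * distinct_share)
    print(f"{reachable:,}")     # on the order of tens of millions of profiles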

The information is here.

Monday, November 20, 2017

Best-Ever Algorithm Found for Huge Streams of Data

Kevin Hartnett
Wired Magazine
Originally published October 29, 2017

Here is an excerpt:

Computer programs that perform these kinds of on-the-go calculations are called streaming algorithms. Because data comes at them continuously, and in such volume, they try to record the essence of what they’ve seen while strategically forgetting the rest. For more than 30 years computer scientists have worked to build a better streaming algorithm. Last fall a team of researchers invented one that is just about perfect.

“We developed a new algorithm that is simultaneously the best” on every performance dimension, said Jelani Nelson, a computer scientist at Harvard University and a co-author of the work with Kasper Green Larsen of Aarhus University in Denmark, Huy Nguyen of Northeastern University and Mikkel Thorup of the University of Copenhagen.

This best-in-class streaming algorithm works by remembering just enough of what it’s seen to tell you what it’s seen most frequently. It suggests that compromises that seemed intrinsic to the analysis of streaming data are not actually necessary. It also points the way forward to a new era of strategic forgetting.
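The paper itself is best read at the source, but the underlying idea of “strategic forgetting” can be illustrated with a much older and simpler streaming sketch, the Misra-Gries frequent-items algorithm. The Python below is only that classic illustration of bounded-memory streaming, not the optimal algorithm by Larsen, Nelson, Nguyen and Thorup described in the article.

    # Classic Misra-Gries frequent-items sketch: keeps at most k-1 counters and
    # "strategically forgets" by decrementing them all when space runs out.
    # Shown purely to illustrate streaming under bounded memory; it is not the
    # new optimal algorithm the article describes.
    def misra_gries(stream, k):
        counters = {}
        for item in stream:
            if item in counters:
                counters[item] += 1
            elif len(counters) < k - 1:
                counters[item] = 1
            else:
                # forget a little of everything; drop counters that reach zero
                counters = {x: c - 1 for x, c in counters.items() if c > 1}
        return counters  # any item occurring more than n/k times is guaranteed to survive

    stream = ["a", "b", "a", "c", "a", "b", "a", "d", "a"]
    print(misra_gries(stream, k=3))  # "a" dominates the stream and is retained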

Monday, November 6, 2017

Is It Too Late For Big Data Ethics?

Kalev Leetaru
Forbes.com
Originally published October 16, 2017

Here is an excerpt:

AI researchers are rushing to create the first glimmers of general AI and hoping for the key breakthroughs that take us towards a world in which machines gain consciousness. The structure of academic IRBs means that little of this work is subject to ethical review of any kind, and its highly technical nature means the general public is largely unaware of the rapid pace of progress until it comes into direct life-or-death contact with consumers, as with driverless cars.

Could industry-backed initiatives like one announced by Bloomberg last month in partnership with BrightHive and Data for Democracy be the answer? It all depends on whether companies and organizations actively infuse these values into the work they perform and sponsor or whether these are merely public relations campaigns for them. As I wrote last month, when I asked the organizers of a recent data mining workshop as to why they did not require ethical review or replication datasets for their submissions, one of the organizers, a Bloomberg data scientist, responded only that the majority of other ACM computer science conferences don’t either. When asked why she and her co-organizers didn’t take a stand with their own workshop to require IRB review and replication datasets even if those other conferences did not, in an attempt to start a trend in the field, she would only repeat that such requirements are not common to their field. When asked whether Bloomberg would be requiring its own data scientists to adhere to its new data ethics initiative and/or mandate that they integrate its principles into external academic workshops they help organize, a company spokesperson said they would try to offer comment, but had nothing further to add after nearly a week.

The article is here.

Friday, December 30, 2016

The ethics of algorithms: Mapping the debate

Brent Daniel Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter, Luciano Floridi
Big Data and Society
DOI: 10.1177/2053951716679679, Dec 2016

Abstract

In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.

The article is here.

Thursday, October 30, 2014

Are You A Hysteric, Or A Sociopath? Welcome to the Privacy Debate

By Irina Raicu
Ethical Issues in the Online World
Originally posted October 7, 2014

Whether you’re reading about the latest data-mining class action lawsuit through your Google Glass or relaxing on your front porch waving at your neighbors, you probably know that there’s a big debate in this country about privacy.  Some say privacy is important. Some say it’s dead.  Some say kids want it, or not. Some say it’s a relatively recent phenomenon whose time, by the way, has passed—a slightly opaque blip in our history as social animals. Others say it’s a human right without which many other rights would be impossible to maintain.

It’s a much-needed discussion—but one in which the tone is often not conducive to persuasion, and therefore progress.  If you think concerns about information privacy are overrated and might become an obstacle to the development of useful tools and services, you may hear yourself described as a [Silicon Valley] sociopath or a heartless profiteer.  If you believe that privacy is important and deserves protection, you may be called a “privacy hysteric.”

The entire article is here.

Wednesday, November 13, 2013

The Real Privacy Problem

As Web companies and government agencies analyze ever more information about our lives, it’s tempting to respond by passing new privacy laws or creating mechanisms that pay us for our data. Instead, we need a civic solution, because democracy is at risk.

By Evgeny Morozov on October 22, 2013
MIT Technology Review

Here is an excerpt:

First, let’s address the symptoms of our current malaise. Yes, the commercial interests of technology companies and the policy interests of government agencies have converged: both are interested in the collection and rapid analysis of user data. Google and Facebook are compelled to collect ever more data to boost the effectiveness of the ads they sell. Government agencies need the same data—they can collect it either on their own or in coöperation with technology companies—to pursue their own programs.

Many of those programs deal with national security. But such data can be used in many other ways that also undermine privacy. The Italian government, for example, is using a tool called the redditometro, or income meter, which analyzes receipts and spending patterns to flag people who spend more than they claim in income as potential tax cheaters. Once mobile payments replace a large percentage of cash transactions—with Google and Facebook as intermediaries—the data collected by these companies will be indispensable to tax collectors. Likewise, legal academics are busy exploring how data mining can be used to craft contracts or wills tailored to the personalities, characteristics, and past behavior of individual citizens, boosting efficiency and reducing malpractice.
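The redditometro’s basic logic is simple enough to sketch in Python: flag any taxpayer whose observed spending exceeds declared income by more than some margin. The tolerance and record format below are assumptions for illustration, not the Italian tax agency’s actual rules.

    # Illustrative sketch only; the tolerance and field names are assumptions,
    # not the actual redditometro rules.
    def flag_possible_tax_cheat(declared_income: float, observed_spending: float,
                                tolerance: float = 0.2) -> bool:
        """Flag when spending exceeds declared income by more than the tolerance."""
        return observed_spending > declared_income * (1 + tolerance)

    print(flag_possible_tax_cheat(30_000, 45_000))  # True: spends 50% more than declared
    print(flag_possible_tax_cheat(30_000, 33_000))  # False: within the assumed margin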

The updated story is here.

Friday, February 10, 2012

Hospitals Mine Their Patients' Records In Search Of Customers

Critics say hospitals cherry-pick the best-paying patients

By Phil Galewitz
Kaiser Health News
Originally published February 5, 2012

When the oversized postcard arrived last August from Provena St. Joseph Medical Center promoting a lung cancer screening for current or former smokers over 55, Steven Boyd wondered how the hospital had found him.

Boyd, 59, of Joliet, Ill., had smoked for decades, as had his wife, Karol.

Provena didn't send the mailing to everyone who lived near the hospital, just those who had a stronger likelihood of having smoked based on their age, income, insurance status and other demographic criteria.

The nonprofit center is one of a growing number of hospitals using their patients' health and financial records to help pitch their most lucrative services, such as cancer, heart and orthopedic care. As part of these direct mail campaigns, they are also buying detailed information about local residents compiled by consumer marketing firms — everything from age, income and marital status to shopping habits and whether they have children or pets at home.

Hospitals say they are promoting needed services, such as cancer screenings and cholesterol tests, but they often use the data to target patients with private health insurance, which typically pays higher rates than government coverage. At an industry conference last year, Provena Health marketing executive Lisa Lagger said such efforts had helped attract higher-paying patients, including those covered by "profitable Blue Cross and less Medicare."

Strategy Draws Fire

While the strategies are increasing revenues, they are drawing fire from patient advocates and privacy groups, who criticize the hospitals for using private medical records to pursue profits.

Doug Heller, executive director of Consumer Watchdog, a California-based consumer advocacy group, says he is bothered by efforts to "cherry pick" the best-paying patients.

"When marketing is picking and choosing based on people's financial status, it is inherently discriminating against patients who have every right and need for medical information," Heller says. "This is another example of how our health system has gone off the rails."

Deven McGraw, director of the health privacy project at the Center for Democracy and Technology in Washington, says federal law allows hospitals to use confidential medical records to inform patients about things that may help them. 
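Mechanically, the marketing campaigns described above amount to a demographic filter over purchased consumer data. The minimal Python sketch below is hypothetical; the field names, cutoffs and sample records are assumptions, not Provena’s actual criteria, but it shows why critics call the practice cherry-picking.

    # Hypothetical sketch of a mailing-list filter like the one described in the
    # story; field names, cutoffs and sample records are assumptions.
    prospects = [
        {"age": 59, "likely_smoker": True,  "insurance": "private"},
        {"age": 62, "likely_smoker": True,  "insurance": "medicare"},
        {"age": 48, "likely_smoker": False, "insurance": "private"},
    ]

    def targeted(p) -> bool:
        # screening promotion aimed at likely smokers over 55 with private coverage
        return p["age"] >= 55 and p["likely_smoker"] and p["insurance"] == "private"

    mailing_list = [p for p in prospects if targeted(p)]
    print(len(mailing_list))  # only the first, best-paying prospect gets the postcard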

The whole story is here.