Welcome to the Nexus of Ethics, Psychology, Morality, Philosophy and Health Care

Welcome to the nexus of ethics, psychology, morality, technology, health care, and philosophy
Showing posts with label Data Breach.

Friday, September 22, 2023

Police are Getting DNA Data from People who Think They Opted Out

Jordan Smith
The Intercept
Originally posted August 18, 2023

Here is an excerpt:

The communications are a disturbing example of how genetic genealogists and their law enforcement partners, in their zeal to close criminal cases, skirt privacy rules put in place by DNA database companies to protect their customers. How common these practices are remains unknown, in part because police and prosecutors have fought to keep details of genetic investigations from being turned over to criminal defendants. As commercial DNA databases grow, and the use of forensic genetic genealogy as a crime-fighting tool expands, experts say the genetic privacy of millions of Americans is in jeopardy.

Moore did not respond to The Intercept’s requests for comment.

To Tiffany Roy, a DNA expert and lawyer, the fact that genetic genealogists have accessed private profiles — while simultaneously preaching about ethics — is troubling. “If we can’t trust these practitioners, we certainly cannot trust law enforcement,” she said. “These investigations have serious consequences; they involve people who have never been suspected of a crime.” At the very least, law enforcement actors should have a warrant to conduct a genetic genealogy search, she said. “Anything less is a serious violation of privacy.”

(cut)

Exploitation of the GEDmatch loophole isn’t the only example of genetic genealogists and their law enforcement partners playing fast and loose with the rules.

Law enforcement officers have used genetic genealogy to solve crimes that aren’t eligible for genetic investigation per company terms of service and Justice Department guidelines, which say the practice should be reserved for violent crimes like rape and murder only when all other “reasonable” avenues of investigation have failed. In May, CNN reported on a U.S. marshal who used genetic genealogy to solve a decades-old prison break in Nebraska. There is no prison break exception to the eligibility rules, Larkin noted in a post on her website. “This case should never have used forensic genetic genealogy in the first place.”

A month later, Larkin wrote about another violation, this time in a California case. The FBI and the Riverside County Regional Cold Case Homicide Team had identified the victim of a 1996 homicide using the MyHeritage database — an explicit violation of the company’s terms of service, which make clear that using the database for law enforcement purposes is “strictly prohibited” absent a court order.

“The case presents an example of ‘noble cause bias,’” Larkin wrote, “in which the investigators seem to feel that their objective is so worthy that they can break the rules in place to protect others.”


My take:

Forensic genetic genealogists have been skirting GEDmatch privacy rules by searching users who explicitly opted out of sharing DNA with law enforcement. This means that police can access the DNA of people who thought they were protecting their privacy by opting out of law enforcement searches.

The practice of forensic genetic genealogy has been used to solve a number of cold cases, but it has also raised concerns about privacy and civil liberties. Some people worry that the police could use DNA data to target innocent people or to build a genetic database of the entire population.

GEDmatch has since changed its privacy policy to make it more difficult for police to access DNA data from users who have opted out. However, the damage may already be done. Police have already used GEDmatch data to solve dozens of cases, and it is unclear how many people have had their DNA data accessed without their knowledge or consent.

Friday, June 23, 2023

In the US, patient data privacy is an illusion

Harlan M Krumholz
Opinion
BMJ 2023;381:p1225

Here is an excerpt:

The regulation allows anyone involved in a patient’s care to access health information about them. It is based on the paternalistic assumption that for any healthcare provider or related associate to be able to provide care for a patient, unfettered access to all of that individual’s health records is required, regardless of the patient’s preference. This provision removes control from the patient’s hands for choices that should be theirs alone to make. For example, the pop-up covid testing service you may have used can claim to be an entity involved in your care and gain access to your data. This access can be bought through many for-profit companies. The urgent care centre you visited for your bruised ankle can access all your data. The team conducting your prenatal testing is considered involved in your care and can access your records. Health insurance companies can obtain all the records. And these are just a few examples.

Moreover, health systems legally transmit sensitive information with partners, affiliates, and vendors through Business Associate Agreements. But patients may not want their sensitive information disseminated—they may not want all their identified data transmitted to a third party through contracts that enable those companies to sell their personal information if the data are de-identified. And importantly, with all the advances in data science, effectively de-identifying detailed health information is almost impossible.

HIPAA confers ample latitude to these third parties. As a result, companies make massive profits from the sale of data. Some companies claim to be able to provide comprehensive health information on more than 300 million Americans—most of the American public—for a price. These companies' business models are legal, yet most patients remain in the dark about what may be happening to their data.

However, massive accumulations of medical data do have the potential to produce insights into medical problems and accelerate progress towards better outcomes. And many uses of a patient’s data, despite moving throughout the healthcare ecosystem without their knowledge, may nevertheless help advance new diagnostics and therapeutics. The critical questions surround the assumptions people should have about their health data and the disclosures that should be made before a patient speaks with a health professional. Should each person be notified before interacting with a healthcare provider about what may happen with the information they share or the data their tests reveal? Are there new technologies that could help patients regain control over their data?

Although no one would relish a return to paper records, that cumbersome system at least made it difficult for patients’ data to be made into a commodity. The digital transformation of healthcare data has enabled wondrous breakthroughs—but at the cost of our privacy. And as computational power and more clever means of moving and organising data emerge, the likelihood of permission-based privacy will recede even further.
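
Editorial note on the de-identification point above: a handful of quasi-identifiers is often all it takes to undo "anonymization." The sketch below is purely illustrative (the column names, records, and data sources are hypothetical, not drawn from the article) and shows how a de-identified clinical extract can be re-identified by joining it to an outside record, such as a voter roll, on ZIP code, birth date, and sex.

```python
# Illustrative sketch of a linkage (re-identification) attack.
# All records, column names, and values here are hypothetical.
import pandas as pd

# A "de-identified" clinical extract: names removed, but quasi-identifiers kept.
clinical = pd.DataFrame({
    "zip":        ["02139", "97201", "02139"],
    "birth_date": ["1961-07-31", "1988-02-14", "1975-11-02"],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["type 2 diabetes", "depression", "hypertension"],
})

# An outside dataset (e.g., a voter roll) that carries names
# alongside the same quasi-identifiers.
voter_roll = pd.DataFrame({
    "name":       ["A. Example", "B. Example"],
    "zip":        ["02139", "97201"],
    "birth_date": ["1961-07-31", "1988-02-14"],
    "sex":        ["F", "M"],
})

# Joining on the quasi-identifiers re-attaches names to diagnoses.
reidentified = clinical.merge(voter_roll, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

The more detailed the health record, the more such join keys it offers, which is why the article argues that effective de-identification of detailed health information is nearly impossible.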

Friday, June 15, 2018

Tech giants need to build ethics into AI from the start

James Titcomb
The Telegraph
Originally posted May 13, 2018

Here is an excerpt:

But excitement about the software soon turned to comprehending the ethical minefield it created. Google’s initial demo gave no indication that the person on the other end of the phone would be alerted that they were talking to a robot. The software even had human-like quirks built into it, stopping to say “um” and “mm-hmm”, a quality designed to seem cute but that ended up appearing more deceptive.

Some found the whole idea that a person should have to go through an artificial conversation with a robot somewhat demeaning; insulting even.

After a day of criticism, Google attempted to play down some of the concerns. It said the technology had no fixed release date, would take into account people’s concerns and promised to ensure that the software identified itself as such at the start of every phone call.

But the fact that it did not do this immediately was not a promising sign. The last two years of massive data breaches, evidence of Russian propaganda campaigns on social media and privacy failures have proven what should always have been obvious: that the internet has as much power to do harm as good. Every frontier technology now needs to be built with at least some level of paranoia; some person asking: “How could this be abused?”

The information is here.

Friday, March 23, 2018

Mark Zuckerberg Has No Way Out of Facebook's Quagmire

Leonid Bershidsky
Bloomberg News
Originally posted March 21, 2018

Here is an excerpt:

"Making sure time spent on Facebook is time well spent," as Zuckerberg puts it, should lead to the collection of better-quality data. If nobody is setting up fake accounts to spread disinformation, users are more likely to be their normal selves. Anyone analyzing these healthier interactions will likely have more success in targeting commercial and, yes, political offerings to real people. This would inevitably be a smaller yet still profitable enterprise, and no longer a growing one, at least in the short term. But the Cambridge Analytica scandal shows people may not be okay with Facebook's data gathering, improved or not.

The scandal follows the revelation (to most Facebook users who read about it) that, until 2015, application developers on the social network's platform were able to get information about a user's Facebook friends after asking permission in the most perfunctory way. The 2012 Obama campaign used this functionality. So -- though in a more underhanded way -- did Cambridge Analytica, which may or may not have used the data to help elect President Donald Trump.

Many people are angry at Facebook for not acting more resolutely to prevent CA's abuse, but if that were the whole problem, it would have been enough for Zuckerberg to apologize and point out that the offending functionality hasn't been available for several years. The #deletefacebook campaign -- now backed by WhatsApp co-founder Brian Acton, whom Facebook made a billionaire -- is, however, powered by a bigger problem than that. People are worried about the data Facebook is accumulating about them and about how these data are used. Facebook itself works with political campaigns to help them target messages; it did so for the Trump campaign, too, perhaps helping it more than CA did.

The article is here.

First Question: Should you stop using Facebook because they violated your trust?

Second Question: Is Facebook a defective product?

Facebook Woes: Data Breach, Securities Fraud, or Something Else?

Matt Levine
Bloomberg.com
Originally posted March 21, 2018

Here is an excerpt:

But the result is always "securities fraud," whatever the nature of the underlying input. An undisclosed data breach is securities fraud, but an undisclosed sexual-harassment problem or chicken-mispricing conspiracy will get you to the same place. There is an important practical benefit to a legal regime that works like this: It makes it easy to punish bad behavior, at least by public companies, because every sort of bad behavior is also securities fraud. You don't have to prove that the underlying chicken-mispricing conspiracy was illegal, or that the data breach was due to bad security procedures. All you have to prove is that it happened, and it wasn't disclosed, and the stock went down when it was. The evaluation of the badness is in a sense outsourced to the market: We know that the behavior was illegal, not because there was a clear law against it, but because the stock went down. Securities law is an all-purpose tool for punishing corporate badness, a one-size-fits-all approach that makes all badness commensurable using the metric of stock price. It has a certain efficiency.

On the other hand it sometimes makes me a little uneasy that so much of our law ends up working this way. "In a world of dysfunctional government and pervasive financial capitalism," I once wrote, "more and more of our politics is contested in the form of securities regulation." And: "Our government's duty to its citizens is mediated by their ownership of our public companies." When you punish bad stuff because it is bad for shareholders, you are making a certain judgment about what sort of stuff is bad and who is entitled to be protected from it.

Anyway Facebook Inc. wants to make it very clear that it did not suffer a data breach. When a researcher got data about millions of Facebook users without those users' explicit permission, and when the researcher turned that data over to Cambridge Analytica for political targeting in violation of Facebook's terms, none of that was a data breach. Facebook wasn't hacked. What happened was somewhere between a contractual violation and ... you know ... just how Facebook works? There is some splitting of hairs over this, and you can understand why -- consider that SEC guidance about when companies have to disclose data breaches -- but in another sense it just doesn't matter. You don't need to know whether the thing was a "data breach" to know how bad it was. You can just look at the stock price. The stock went down...

The article is here.

Monday, March 27, 2017

Healthcare Data Breaches Up 40% Since 2015

Alexandria Wilson Pecci
MedPage Today
Originally posted February 26, 2017

Here is an excerpt:

Broken down by industry, hacking was the most common data breach source for the healthcare sector, according to data provided to HealthLeaders Media by the Identity Theft Resource Center. Physical theft was the biggest breach category for healthcare in 2015 and 2014.

Insider theft and employee error/negligence tied for the second most common data breach sources in 2016 in the health industry. In addition, insider theft was a bigger problem in the healthcare sector than in other industries, and has been for the past five years.

Insider theft is alleged to have been at play in the Jackson Health System incident. Former employee Evelina Sophia Reid was charged in a fourteen-count indictment with conspiracy to commit access device fraud, possessing fifteen or more unauthorized access devices, aggravated identity theft, and computer fraud, the Department of Justice said. Prosecutors say that her co-conspirators used the stolen information to file fraudulent tax returns in the patients' names.

The article is here.

Wednesday, January 13, 2016

Your health records are supposed to be private. They aren’t.

By Charles Ornstein
The Washington Post
December 30, 2015

Here is an excerpt:

In each story, a common theme emerged: HIPAA wasn’t working the way we expect. And the agency charged with enforcing it, the HHS office for civil rights, wasn’t taking aggressive action against those who violated the law.

We all know HIPAA, whether we recognize the acronym or not. It’s what requires us to stand behind a line, away from other customers, at the pharmacy counter or when checking in at the doctor’s office. It is the reason we get privacy declaration forms to sign whenever we visit a new medical provider. It is used to scare health-care workers, telling them that if they improperly disclose others’ information, they could pay a steep fine or even go to jail.

But in reality, it is a toothless tiger. Unless you’re famous, most hospitals and clinics don’t keep tabs on who looks at your records if you don’t complain. And even though the civil rights office can impose large fines, it rarely does: It received nearly 18,000 complaints in 2014 but took only six formal actions that year. A recent report from the HHS inspector general said the office wasn’t keeping track of repeat offenders, much less doing anything about them.

The story is here.

Monday, January 11, 2016

Cyber security: Attack of the health hackers

Kara Scannell and Gina Chon
FT.com
Originally published December 21, 2015

Here is an excerpt:

Hackers accessed over 100m health records — 100 times more than ever before — last year. Eight of the 10 largest hacks into any type of healthcare provider happened this year, according to the US Department of Health and Human Services.

Insurers scrambled to hire cyber security companies to scrub their systems. Premera Blue Cross, CareFirst BlueCross BlueShield, and Excellus Health Plan announced breaches affecting at least 22m individuals in total since March, including hacks that stretched back more than a year. Investigators told the FT that they believe some of the hacks are related and trace back to China.

The insurers face multiple investigations from state insurance regulators and attorneys-general and some could face fines for failing to comply with state data privacy laws, while federal law enforcement agencies are investigating who is behind the hacks.

The article is here.

Tuesday, April 7, 2015

Premera Blue Cross Breach May Have Exposed 11 Million Customers' Medical And Financial Data

By Kate Vinton
Forbes
Originally published March 17, 2015

Medical and financial data belonging to as many as 11 million Premera Blue Cross customers may have been exposed in a breach discovered on the same day as the Anthem breach, the health insurance company announced Tuesday.

Premera discovered the breach on January 29, 2015. Working with both Mandiant and the FBI to investigate the attack, the company discovered that the initial attack occurred on May 5, 2014. Premera Blue Cross and Premera Blue Cross Blue Shield of Alaska were both impacted, in addition to affiliate brands Vivacity and Connexion Insurance Solutions. Additionally, other Blue Cross Blue Shield customers in Washington and Alaska may have been affected by the breach.

The entire article is here.

Healthcare Accounted For Almost Half Of 2014 Client Breaches

By Christine Kern
Health IT Outcomes
Originally published March 12, 2015

A Kroll study has found the healthcare industry accounted for 49 percent of the company’s “client events” during 2014, followed by business services (retail, insurance, and financial services) at 26 percent, and higher education at 11 percent. The study further found malicious intent breach events increased while those caused by human error declined.

Tuesday, July 29, 2014

Millions of electronic medical records breached

New U.S. government data show that 32 million residents have been affected since 2009.

By Ronald Campbell and Deborah Schoch
The Orange County Register
Originally published July 7, 2014

Thieves, hackers and careless workers have breached the medical privacy of nearly 32 million Americans, including 4.6 million Californians, since 2009.

Those numbers, taken from new U.S. Health & Human Services Department data, underscore a vulnerability of electronic health records.

These records are more detailed than most consumer credit or banking files and could open the door to widespread identity theft, fraud, or worse.

The entire article is here.

Monday, April 7, 2014

Nearly half of identity thefts involve medical data

By Adam Levin
Credit.com
Posted on MarketWatch, March 18, 2014

Here are two excerpts:

“Despite concerns about employee negligence and the use of insecure mobile devices, 88 percent of organizations permit employees and medical staff to use their own mobile devices such as smartphones or tablets to connect to their organization’s networks or enterprise systems such as email. Similar to last year more than half of (these) organizations are not confident that the personally-owned mobile devices or BYOD are secure.”

According to the report, very few organizations require their employees to install anti-virus/anti-malware software on their smartphones or tablets, scan them for viruses and malware, or scan and remove all mobile apps that present a security threat before allowing them to be connected to their networks or systems.

(cut)

Medical identity theft is on the rise, just as the rise in criminal breaches of health care providers is spiking. Medical identity theft accounted for 43% of all identity theft reported in 2013, and the U.S. Department of Health and Human Services estimates that somewhere between 27.8 and 67.7 million people’s medical records have been breached since 2009 (and that’s before the flawed rollout of the Affordable Care Act).

The entire article is here.

Wednesday, April 2, 2014

US Health Information Breaches Up 137%

By Roger Collier
CMAJ News
Originally posted March 5, 2014

More than seven million health records in the United States were affected by data breaches in 2013, an increase of 137% over the previous year, according to the annual breach report by Redspin, an information security company based in Carpinteria, California.

Since 2009, there has been a rapid rise in the adoption of electronic health records in the US. There have also been 804 breaches of health information affecting nearly 30 million patient health records reported to the Secretary of Health and Human Services, as required by law.

The entire article is here.

Friday, February 21, 2014

HIPAA data breaches climb 138 percent

By Erin McCann
Healthcareitnews.com
Originally posted February 6, 2014

When talking HIPAA privacy and security, the numbers do most of the talking.

Take 29.3 million, for instance, the number of patient health records compromised in a HIPAA data breach since 2009, or 138 percent, the percent jump in the number of health records breached just from 2012.

These numbers, compiled in a February 2014 breach report by healthcare IT security firm Redspin, though, don't tell the whole story, as these are numbers reported to the U.S. Department of Health and Human Services by HIPAA covered entities.

The entire article is here.

Thursday, October 17, 2013

More HIPAA enforcement coming

By Erin McCann
Healthcare IT News
Originally published September 24, 2013

When Office for Civil Rights Director Leon Rodriguez took the stage Monday to talk HIPAA at the HIMSS Media and Healthcare IT News Privacy and Security Forum, the timing was perfect.

With the HIPAA Omnibus Final Rule taking effect Sept. 23, Rodriguez talked about the increased enforcement to come, the importance of properly safeguarding patient privacy, and the what-not-to-dos, or breach blunders, that have resulted in hefty monetary penalties for groups that failed to take patient privacy and security seriously.

"Today is a critical day for the Omnibus," said Rodriguez, who explained that the agency is working to strike a balance between effective enforcement and clearly communicating what all the rules are surrounding patient privacy and security.


Tuesday, June 18, 2013

Large Hospital Breach Caused by Inside Inappropriate Access

Health Data Management
Originally published May 31, 2013

Bon Secours Mary Immaculate Hospital in Suffolk, Va., is notifying about 5,000 patients after discovering a significant amount of inappropriate access to patients’ electronic health records by two employees inside the facility.

“During an April 2013 audit of a patient’s medical record, the health system identified suspicious access that prompted an investigation,” according to a notice the hospital issued. “The investigation revealed that two members of the patient care team accessed patients’ medical records in a manner that was inconsistent with their job functions and hospital procedures, and inconsistent with the training they received regarding appropriate access of patient medical records.”

The entire story is here.

Wednesday, October 10, 2012

Reducing the Risk of a Breach of PHI from Mobile Devices

Latest HHS Fine Hits The Massachusetts Eye and Ear Infirmary

by Rick Kam, ID Experts
Originally published on September 26, 2012

The Massachusetts Eye and Ear Infirmary and Massachusetts Eye and Ear Associates Inc. (MEEI) will pay $1.5 million to the Department of Health and Human Services (HHS) for potential violations of the HIPAA Security Rule. In its release, HHS explains that it wasn’t just one issue or misstep that led to the fine, but rather a series of errors and inaction: the failure to take required steps,

“…such as conducting a thorough analysis of the risk to the confidentiality of ePHI maintained on portable devices, implementing security measures sufficient to ensure the confidentiality of ePHI that MEEI created, maintained, and transmitted using portable devices, adopting and implementing policies and procedures to restrict access to ePHI to authorized users of portable devices, and adopting and implementing policies and procedures to address security incident identification, reporting, and response.”

The entire story is here.
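
Editorial note: one of the safeguards HHS faulted MEEI for lacking is protection of ePHI stored on portable devices. As a rough, purely illustrative sketch (not a compliance recommendation; the file names and key handling are hypothetical), encrypting a file before it is copied to a portable drive might look something like the following, using Python's third-party cryptography package.

```python
# Illustrative sketch: encrypt a file of ePHI before it goes onto a portable
# device. Requires the third-party "cryptography" package. File names and
# key handling are hypothetical and not a compliance recipe.
from cryptography.fernet import Fernet

# In practice the key would come from a managed key store and would never be
# stored on the portable device alongside the ciphertext.
key = Fernet.generate_key()
fernet = Fernet(key)

# Read the plaintext export (hypothetical file name).
with open("patient_export.csv", "rb") as f:
    plaintext = f.read()

# Encrypt and write the ciphertext; only the .enc file would be copied
# to the portable drive.
with open("patient_export.csv.enc", "wb") as f:
    f.write(fernet.encrypt(plaintext))

# An authorized workstation holding the key can reverse the process.
with open("patient_export.csv.enc", "rb") as f:
    restored = fernet.decrypt(f.read())
assert restored == plaintext
```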

Monday, September 10, 2012

Cancer Care Group Data Breach Exposes Nearly 55,000 Patients

By Kyle Murphy
EHR Intelligence
Originally published August 28, 2012

In a press release today, Cancer Care Group (Indianapolis, IN) announced that a laptop computer containing its computer server backup media was stolen from an employee’s locked car on July 19, 2012. The breach has potentially exposed the protected health information (PHI) or personally identifiable information (PII) of close to 55,000 individuals, including the organization’s own employees. The latest incident comes less than a month after Apria Healthcare reported a similar incident in Arizona where an employee’s car was broken into and a laptop containing information for 11,000 patients stolen.

The entire story is here.

Tuesday, August 21, 2012

More than 14K affected in Oregon hospital breach

By Beth Walsh
CMIO
Originally published August 6, 2012

Yet another hospital has suffered a data breach. The administration at Oregon Health & Science University Hospital (OHSU) in Portland is sending letters to the families of 702 pediatric patients after a USB drive containing some of their patient information was stolen. In total, data for more than 14,000 patients was stored on the drive, along with information for about 200 OHSU employees.

The entire story is here.

Editorial note: It is advisable not to take patient data home, whether it is stored on a laptop or on a portable storage device such as a jump drive.

Monday, July 30, 2012

Healthcare Data Breaches Still Rising

By Deborah Hirsch
HealthTechZone Contributor
Originally published July 19, 2012

Here are some excerpts:

The healthcare industry has the highest percentage of data breaches of any sector, according to a report by Symantec. Healthcare also had the highest number of reported breaches, at 43 percent, Patricia Resende reports.

And the costs continue to rise, with each breach costing organizations $5.5 million, and each compromised record, $194, the Symantec study reports. And even though the costs have dropped slightly from several years ago, according to a Ponemon study, healthcare is the one area where they have not. Physicians’ offices and small clinics say they have lost more than 54,000 patient records due to breaches since 2009.

And they’ve occurred all over the country, from Utah, where the files of almost 300,000 Medicaid patients were breached in March, to Boston, where a laptop containing more than 2,000 patient records was stolen from a Boston Children’s Hospital employee at a South American conference.

The entire story is here.